Bay Area Skeptics

The San Francisco Bay Area's skeptical organization since 1982

SciSchmooze
Lili Galilean
9 February 2026


I am writing to you today with a mix of excitement and humility: these are big shoes to fill. I am officially joining the SciSchmooze team, and my first order of business is to say a massive thank you to Herb Masters.

If you know Herb, you know he is the gravitational center of this community. Whether he was engaging the public as a long-time volunteer at the Exploratorium or advocating for critical thinking as a Board Member of the Bay Area Skeptics, Herb has always championed the idea that science should be accessible, rigorous, and—crucially—fun.

In fact, that is exactly how we met. It wasn’t at a stuffy lecture or a board meeting; I spotted a guy wearing a “cats in space” t-shirt, and I knew I had to talk to him. That’s Herb in a nutshell: deeply serious about science, but never taking himself too seriously.

Stepping into this role is an honor. For those I haven’t yet met: I am a physicist and materials scientist based here in the Bay Area. My work has taken me from studying the vastness of the cosmos to engineering the structure of new materials, and it has taught me one consistent lesson: science is rarely as neat as the headlines suggest.

I spend my days at the lab bench asking “how do we know that?”—so I plan to bring that same critical eye to this newsletter. I am excited to carry the torch Herb lit, focusing on reproducible science, timely innovation, and the questions that really matter.


The “Black Box” in the Lab Coat

There is a growing sense that we are entering a new Golden Age of discovery, where Artificial Intelligence will accelerate solutions for everything from climate change to disease. While these computational tools are undeniable accelerators, they also introduce a fundamental shift in how we generate knowledge. As algorithms begin to produce accurate answers without showing their work, the scientific community is grappling with a core tension: If we trade deep understanding for accurate prediction, are we still doing science?

We call this the Black Box Problem. In traditional science, if a theory predicts a new material, we usually know why (e.g., “this chemical bond is weak”). With deep learning, the algorithm effectively says, “Trust me, I saw a pattern in the data.”

There is a famous cautionary tale about this involving wolves and huskies. Researchers trained an AI to distinguish between the two animals, and it performed beautifully—until they looked inside the “Black Box.” They realized the AI wasn’t looking at the animals at all; it was looking at the background. Since most wolf photos were taken in the snow and husky photos on grass, the AI had simply learned that Snow = Wolf. It was statistically accurate, but scientifically meaningless.
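This failure mode is easy to reproduce in miniature. Below is a toy sketch in Python, using invented data rather than anything from the actual study, of a “classifier” that keys entirely on the background feature. It looks flawless on the training photos and collapses the moment the backgrounds change:

```python
# Toy demonstration of "shortcut learning": a model that keys on a
# spurious background feature looks accurate until the setting changes.
# Hypothetical minimal sketch, not the real wolf/husky experiment.

# Each "photo" is (snow_background, label), where label 1 = wolf, 0 = husky.
# Training set: every wolf was photographed in snow, every husky on grass.
train = [(1, 1)] * 50 + [(0, 0)] * 50

def shortcut_model(snow_background):
    # The model never looks at the animal, only the background.
    return 1 if snow_background else 0

train_acc = sum(shortcut_model(x) == y for x, y in train) / len(train)
print(f"Training accuracy: {train_acc:.0%}")  # 100% — looks great

# Field test: huskies playing in snow, wolves on a grassy hillside.
test = [(1, 0)] * 50 + [(0, 1)] * 50
test_acc = sum(shortcut_model(x) == y for x, y in test) / len(test)
print(f"Accuracy when backgrounds change: {test_acc:.0%}")  # 0%
```

The pattern the model found was real, and perfectly predictive on the data it saw. It just wasn’t about wolves.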

We see this same “shortcut learning” in hard science, too.

In recent drug discovery studies, AI models tasked with predicting which molecules would bind to a protein target achieved record-breaking accuracy. But when chemists analyzed the logic, they found the AI wasn’t calculating hydrogen bonds or electrostatic fits. It had simply learned that “Bigger = Better.” As further research highlighted, these models often exploit biases in the training data, learning that larger molecules simply have more surface area to latch onto things. Consequently, the AI prioritized heavy, impractical compounds over the precise “lock-and-key” fit required for effective drugs.

A study released in late 2023 highlighted a similar issue in my own field: the “Perfect Crystal Fallacy.” Google DeepMind’s GNoME model claimed to discover nearly 400,000 new stable materials. However, when leading crystallographers scrutinized the results, they found a critical flaw: the AI was effectively hallucinating stability. By assuming “perfect” atomic structures and ignoring the disorder, defects, and messiness of real-world physics, the model generated thousands of compounds that, while theoretically stable, are chemically impossible to create in a lab.

This highlights the danger of confusing computational prediction with physical reality.

The Turing Award winner Judea Pearl famously argues in The Book of Why that “Data do not understand causes and effects; humans do.” He points out that while AI is excellent at finding correlations (the “what”), it is incapable of understanding the mechanism (the “why”).

Computer scientist Yejin Choi takes this a step further, describing common sense as the “Dark Matter of Intelligence.” Just as dark matter makes up the bulk of the universe yet remains invisible to our eyes, “common sense” makes up the bulk of human reasoning yet remains invisible to current AI models.

Until we solve that, we are handing over the scientific method to algorithms that can predict ideal molecules but might fail to notice why they are physically impossible.

My advice? Keep your wonder, but bring your skepticism. Science isn’t just about getting the answer; it’s about showing your work.

Have a wonderful, curious week.
Lili Galilean


My Top Picks for the Week

  1. Chatbots Decoded: Exploring AI
    Wednesday–Sunday | Computer History Museum, Mountain View
    Since I’m discussing the “hopes and fears” of AI this week, this new exhibit is a must-see. It allows you to interact with Ameca, a robot powered by GPT-4, and explore the history of “talking machines.” It’s the perfect place to confront the “Black Box” face-to-face.
  2. Stanford Energy Seminar: Methane and Hydrogen in a Warming World
    Monday, 02/09/2026 – 4:30 PM | Stanford University
    Rob Jackson (Earth System Science) presents “Recent Trends and Their Causes.” This is a timely look at emissions data for two critical gases—and particularly relevant given the current industry hype around hydrogen infrastructure.
  3. Evolution Day 2026
    Friday, 02/13/2026 & Saturday, 02/14/2026 | UC Berkeley
    Celebrate the father of evolution at the Essig Museum of Entomology. This year’s “Evolution Day” festivities extend across two days, offering a rare chance to see behind-the-scenes collections and discuss the science that connects us all.

Upcoming Events:
Click to see the next two weeks of events in your browser.

