A New Understanding of the Impact of Pseudoscience
Dr. Paul Zak shares his views on the crisis in confidence in psychology, science communication, and how to differentiate between sound science and overblown hype.
Published August 02, 2013
By Academy Contributor

A series of unfortunate events over the last few years has sparked a “crisis in confidence” in the field of psychology. This introduction to an Association for Psychological Science journal issue dedicated to the subject details cases of research fraud, misleading statistics, questionable research practices, and downright pseudoscience. Meanwhile, scientists and journalists have expressed frustration and fatigue with claims purporting to answer grandiose questions about human nature and experience with neuroscience. “A new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix ‘neuro’ to whatever you are talking about,” complains Steven Poole in the New Statesman. “‘Neuroeconomics’ is the latest in a long line.”
Paul Zak, PhD, is a neuroeconomist whose work focuses on the role of oxytocin (OT) in decision-making. He also engages in public education outreach efforts that draw some flak for presenting an overly simplistic version of the scientific story. In this interview, Dr. Zak addresses some of these issues and shares recommendations for discerning good science from hype.
There has been a lot of commentary recently on problems with psychology and neuroscience. Some have described the situation as a crisis in confidence. Could you comment?
That’s a broad area. There’s a difference between what scientists do on a day-to-day basis and what gets talked about in the media. Sometimes journalists will just read an abstract or talk to the author without fully understanding a study, and oversimplifications or generalizations end up in the coverage. We need to be careful with how we present our work. Nothing in the brain is simple. I don’t think there’s any crisis in the scientific field; the field’s just exploded. Like in any new area, of course there’s some bad work out there that needs to be sifted out. Confirmation is the key, but that’s why we do science—to find out what’s really going on.
In May’s Nature Reviews Neuroscience, there was an article about low sample sizes in neuroscience undermining the reliability of studies and turning out statistically significant results that aren’t representative of a true effect. How can neuroscience address this problem, given realistic funding constraints?
That’s a great question! There’s “the law of small numbers,” a joke (that’s also true) that if you do something with small samples, you often get results that will disappear with a larger sample size. We’re in a tight funding time period. If budgets were unlimited, we could all do experiments with huge sample sizes and the problem would be solved. But that’s not going to happen. The better solution is replication.
The American Psychological Association actually set up a committee and a journal to encourage replication studies. Our lab’s work has been replicated a lot with good results. It’s honestly really scary when people contact you and want to replicate your work. You think, “Gosh, is this gonna work out?!” I’ve done experiments with the same methodologies a year apart and, for whatever reason, those data will sometimes look a little different—sometimes a lot different. So I wouldn’t be surprised if, in replication experiments, a bunch of work fails to be verified. It’s a worry, but it’s really important that we do this.
Also, a little humility would be nice. When you go to your doctor, you’re encouraged to seek a second opinion. It’d be nice to see the same thing with scientists’ presentation of research results. We should also change the incentive structure for doing replication studies.
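To make the small-sample point concrete, here is a minimal simulation sketch (an editorial illustration, not anything from Dr. Zak’s lab or the Nature Reviews Neuroscience article): with a small true effect and small per-group samples, the studies that happen to reach p < 0.05 tend to overestimate the effect, and direct replications at the same sample size often fail. The effect size, sample size, and number of simulated studies are illustrative assumptions.

```python
# Illustrative simulation of the "law of small numbers" discussed above.
# Assumptions (not from the interview): a small true effect (Cohen's d = 0.2),
# 15 participants per group, and a p < 0.05 publication filter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2        # assumed true effect size (standardized)
n_per_group = 15         # assumed sample size of an underpowered study
n_studies = 10_000

sig_effects, replicated = [], []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1, n_per_group)
    control = rng.normal(0.0, 1, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:               # only "significant" studies get noticed
        sig_effects.append(treatment.mean() - control.mean())
        # direct replication attempt at the same (small) sample size
        rep_t = rng.normal(true_effect, 1, n_per_group)
        rep_c = rng.normal(0.0, 1, n_per_group)
        _, rep_p = stats.ttest_ind(rep_t, rep_c)
        replicated.append(rep_p < 0.05)

print(f"mean effect among significant studies: {np.mean(sig_effects):.2f} "
      f"(true effect: {true_effect})")
print(f"replication rate at n = {n_per_group} per group: {np.mean(replicated):.0%}")
```

Under these assumptions, the “significant” studies substantially overestimate the true effect and only a small fraction of replications succeed, which is the pattern behind the calls for replication and larger samples.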
This issue isn’t always acknowledged in the media, where conclusions are often overblown and oversimplified. How can scientists better communicate this issue and the processes by which it’s being addressed to the public?
Science is kind of like a new religion sometimes—it’s done by high priests whose words get passed down as facts. Internet media and open access journals are a great advancement in this area. Scientists and general readers can and do comment publicly on papers and it generates a dialogue. That’s what we should be moving towards. We’re all in this together. We as the scientific community should be better at listening to the public, and I think the public should be more skeptical about accepting what comes out of the journals and general media. Being humble, being skeptical, being honest, and listening to each other—these are good guidelines for scientists and the general public.
Your area of study, the effects of oxytocin on behavior, receives a lot of criticism. The moniker “Dr. Love” also draws a fair bit of negative attention. What’s your response to this?
First of all, I don’t call myself Dr. Love. Magazines call me that. But I’m happy to have that persona. People who see my talks tell me they ask their spouses for hugs to get oxytocin. Is that exactly what’s going on in their brains? No, it’s not that simple. But is this a useful way to get people to think about the importance of connections and a good first step into thinking about neuroscience? Sure! A great thing about having a public presence is that it’s a place from which to encourage people to be more loving to each other. I’m happy to use a scientific pulpit to do that.
I tend not to defend myself. I think the work we do stands for itself. If people are skeptical, that’s good! You can read the research and draw your own conclusions. The papers are online. I’m confident in the studies we’ve done, and many of them have been positively replicated. Our lab is totally transparent. People come visit the lab. Other than the participants in an experiment, because we protect their anonymity, we’re totally open.
Again, though, there’s a separation between work in the lab and work with the public. From the general public perspective, I’m all about fun and creating interest. Backlash is just part of the gig. I think part of our job as scientists is not to be high priests, but to be journeymen. People are curious and we should help nurture that.
A paper by Michael McCullough et al. recently pointed out vast discrepancies between the measurements of plasma oxytocin levels yielded by different techniques (radioimmunoassays and enzyme immunoassays), as well as the issue of other molecules being falsely identified as OT. These both seem like serious problems.
We’ve spent about a million dollars of your tax money working on these issues. There are two issues. One is whether the ELISA (enzyme-linked immunosorbent assay) is good. The RIA (radioimmunoassay) is much more precise at measuring OT than the ELISA, but it requires a lot more expertise and careful handling. The other issue is identifying OT correctly. The McCullough paper talks about OT levels.
That’s less important in my protocol because we look at relative change in OT, as opposed to basing our conclusions on measuring levels. Levels are much harder to nail than changes. If a stimulus induces the brain to make OT, it’s presumably not changing the other stuff showing up as OT. We avoid the false ID problem by comparing levels before and after a stimulus, so we know if levels have increased or decreased relative to where they were prior to the stimulus.
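As a rough sketch of the within-subject comparison described above (hypothetical numbers and a generic paired test, not the lab’s actual assay pipeline), the conclusions rest on each participant’s change from baseline rather than on absolute OT levels:

```python
# Hypothetical example of analyzing relative change in OT rather than absolute levels.
# The plasma OT values (pg/mL) below are made up for illustration.
import numpy as np
from scipy import stats

baseline = np.array([310.0, 295.0, 402.0, 350.0, 288.0, 365.0])       # pre-stimulus
post_stimulus = np.array([355.0, 300.0, 470.0, 398.0, 292.0, 410.0])  # post-stimulus

pct_change = 100 * (post_stimulus - baseline) / baseline  # within-subject relative change
t, p = stats.ttest_rel(post_stimulus, baseline)           # paired comparison

print(f"mean % change in OT: {pct_change.mean():.1f}%")
print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")
```

Because every measurement is compared with the same participant’s own baseline, anything that shows up consistently in both assays (including material misidentified as OT) largely cancels out of the change score.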
Where’s the trade-off between engaging the public and potentially misrepresenting the work of science? Do you worry that the actual rigor and complexity of the science is lost in translation?
That’s a great question and I wish I had a better answer. I think we have an obligation to get people asking questions. The stuff I do on TV isn’t sound scientific experimentation. It’s an illustration of a larger point. They’re related, but they’re not the same thing. I think it’s OK to show the public illustrations. I did a stunt on TV where I jumped out of a plane and drew some blood from my arm.
Are there problems with this methodologically? Of course! But this is just something I do to generate curiosity. The actual experiments we do are rigorous; these are just demos to spark an interest. The hope is that people will be interested and follow up with more formal investigations.
I’m lucky to work with a team of amazing individuals. I could spend more time in my lab and less time in the media and talking to people, but my lab works smoothly with excellent quality control and we’re well funded. I like being out with people. Exploring your world through science is fun, and I love sharing that. Think about everything as an experiment and just keep trying things. With the media outreach, that’s what I’m trying to get people to do.
Do you have advice on how to differentiate between sound conclusions, hype, and bad science?
Read publications with reputations for high quality and being trustworthy sources. If you’re getting an idea from some wacky website, check up on it before you assume it’s true. Do that in general, in fact. You have to be skeptical and pursue dialogue. Don’t just look at the abstract and jump to conclusions; do some of your own digging. Listen to the conversations around ideas.
A good rule is consensus. If the consensus in the scientific community is X, X is likely to be right. Look to see if a conclusion has been verified through replication. Read on your own. You can’t just read the abstract if you’re really interested in understanding something (though a lot of us do that, even professionals).