From The Desk of...The Chief Scientist

"Radical Empathy"

Written by Paul Sutter on Monday, 26 February 2018. Posted in From The Desk of...The Chief Scientist

Recently I was invited to appear on Midnight in the Desert, a late-night radio show hosted by Heather Wade and usually featuring topics such as UFOs, time travel, energy beings, bigfoots (bigfeet?), and more. Honestly, I was torn. On one hand, I tend to distance myself from those subjects - I'm here to communicate what we know about the universe through science, after all. And those topics are...well, less than rigorous.

On the other hand - and the reason I ultimately accepted the offer - this is a new audience. I could stay in my comfort zone and talk to the same people who accept the same worldview that I espouse, or I could take a chance and speak to people who might be unaccepting of, and perhaps even hostile to, my values.

It was a risk - personally, professionally - so I took an approach of radical empathy. At the top of the interview I said I respected the host and the listeners, and I wasn't there to tell people what to believe. The folks I spoke to had deep, sincere, heartfelt, emotional experiences. Who was I to stomp all over that?

Instead I explained why I, as a scientist, must maintain a high skeptical bar for my own beliefs, and how we could use personal experiences to explore all sorts of cool physics about the world around us and test the limits of our knowledge. Swapping stories, if you will.

Now I'm not trying to hold myself up as some paragon of science communication virtue, but I took a risk and it paid off.

The host and callers were a blast to chat with, and the show was a wonderful opportunity to share some science. At the end of the broadcast, Heather told me that I was the first scientist to come on the program and not act smug, contemptuous, and condescending towards the topics and listeners. I was disappointed to hear that, but not altogether surprised. I happily accepted her invitation for another visit in the future.

Food for thought.

"Statistics Brought to You by the Letter P"

Written by Paul Sutter on Sunday, 18 February 2018. Posted in From The Desk of...The Chief Scientist

Last week I mentioned an odd term, p-value, which is commonly used in deciding whether your results are worth mentioning to your colleagues and the public. Of course it has a strict and narrow meaning, and of course that meaning is abused and misinterpreted in discussions about science.

Let's say you're performing an experiment: a pregnancy test. The box claims 95% accuracy, and if you read the fine print it's referring to a p-value of 5%. You take the test and it's positive! So are you really pregnant, or not?

Unfortunately your urgent question hasn't yet been answered. A p-value compares the hypothesis you're testing ("I'm pregnant") to what's called a null hypothesis (in this case, "no baby"), and a p-value of 5% says that if you were *not* pregnant, there would only be a 5% chance that the test would return a positive result.

You might be tempted to flip this around and state that there's a 95% chance you're actually pregnant, but you would be committing an egregious statistical sin - and this is the same sin committed wittingly or unwittingly by science communicators and sometimes scientists themselves.

Here's the problem: what if you're male? The test can still come back positive, because it's not answering the question "am I pregnant?" but rather "if I'm not pregnant, what are the chances of the test returning a positive result?" It's a low number - 5% - but not zero. Thus males can still get a positive result despite never being pregnant.

The p-value by itself was only ever intended to be a "let's keep digging" guidepost, not a threshold for believability. To answer the question you actually want answered, you have to fold in prior knowledge. Combined with a low p-value, a healthy female of reproductive age can begin to conclude that there might be a baby on the way. A male...not so much. In either case, the p-value alone wasn't enough, and announcements based solely on that number need to be viewed suspiciously.
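Folding in prior knowledge is just Bayes' theorem in action. Here's a minimal sketch in Python (not from the post - the specific numbers are illustrative assumptions) showing how the same 5% false-positive rate leads to very different answers depending on the prior:

```python
# Sketch: why a 5% p-value alone can't answer "am I pregnant?"
# The prior probability matters. All numbers here are illustrative.

def posterior_pregnant(prior, sensitivity=0.99, false_positive_rate=0.05):
    """P(pregnant | positive test) via Bayes' theorem.

    false_positive_rate plays the role of the p-value:
    P(positive test | not pregnant).
    """
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    if p_positive == 0:
        return 0.0
    return sensitivity * prior / p_positive

# A healthy female of reproductive age (illustrative 30% prior):
print(posterior_pregnant(prior=0.30))  # about 0.89 - likely a baby on the way

# A male (prior of zero):
print(posterior_pregnant(prior=0.0))   # 0.0 - the positive was a fluke
```

Same test, same p-value, opposite conclusions - which is exactly why the p-value alone was never enough.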

"P-hacking the System"

Written by Jaclyn Reynolds on Monday, 12 February 2018. Posted in From The Desk of...The Chief Scientist

Science is hard. Scientists have to stare at mountains of data and try to figure out what secrets nature is whispering to them. There are innumerable blind alleys, dead ends, and false starts in academic research. That's life, and that's why over the centuries we've developed sophisticated statistical techniques to help lead us to understanding. But if you're not careful, you can fool yourself into thinking there's a signal when really you've found nothing but noise.

The problem is in correlations, or when two variables in your experiment or observation seem to be related to each other. Uncovering a correlation is usually the first step in "hey I think I found something," and so many researchers report a connection as soon as their experiment reveals one.

But experiments are often exceedingly complex, with many variables constantly changing - sometimes under your control and sometimes not. If you have, say, twenty variables that are all totally random, then by pure chance alone some pairs of those variables will almost certainly appear correlated.

So when scientists fail to spot the correlation they were looking for, sometimes they start digging through the data until something pops up. And when it inevitably does - publish! But it was just a statistical fluke all along.

This practice is called "p-hacking", for reasons I'll get into another time, and it's a prime source of juicy headlines but faulty results.
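You can watch this happen with a quick simulation. The sketch below (mine, not from the post) generates twenty completely independent random variables and counts how many pairs clear a rough p &lt; 0.05 correlation threshold anyway:

```python
# Sketch: spurious "significant" correlations among purely random data.
import math
import random

random.seed(42)

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Twenty totally independent "variables", 30 samples each.
variables = [[random.gauss(0, 1) for _ in range(30)] for _ in range(20)]

# For n = 30, |r| > 0.36 corresponds roughly to p < 0.05.
spurious = [
    (i, j)
    for i in range(20)
    for j in range(i + 1, 20)
    if abs(pearson_r(variables[i], variables[j])) > 0.36
]
print(f"{len(spurious)} 'significant' correlations out of 190 pairs")
```

With 190 possible pairs and a 5% false-positive rate, you should expect a handful of "discoveries" in data that contains nothing at all - exactly the fluke a p-hacker ends up publishing.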
