From The Desk of...The Chief Scientist

"Disengaged"

on Monday, 24 September 2018. Posted in From The Desk of...The Chief Scientist

Putting it mildly, scientists are somewhat passionate about their work. And like anyone with a passion for a particular topic, once you get them going you can't get them to shut up. I guarantee that if you walk into any random faculty office (please knock first) and start asking curious, honest questions, you'll be there for hours.

So why aren't scientists out engaging with the public more often?

There are of course a few challenges to public outreach from the perspective of a scientist: difficulty in translating the jargon, fear of saying something inaccurate, discomfort in presenting to large crowds, and so on. But scientists are some of the smartest people in the world - surely they can overcome challenges like that?

They can...if they're incentivized. But the modern academic system actively discourages public engagement. The number one focus for a researcher is getting grants, followed closely by writing papers. Teaching and serving on committees are also on the list...somewhere. I suppose they also like to spend time with their families.

What little time does get devoted to outreach is usually given selflessly (with no net benefit to tenure or advancement prospects) or spent satisfying some thin requirement in a grant. Most scientists want to do outreach, but their opportunities are limited. Unfortunately there are no easy answers to this dilemma, except to create as many frameworks as possible that make it easy for scientists and the public to connect. In other words, if scientists can't come to the people, the people have to come to the scientists.

"In Experts We Kind Of Trust"

on Monday, 17 September 2018. Posted in From The Desk of...The Chief Scientist

It's a delicate balance. We want communities to trust, respect, and understand science. But science is a method, a process. The people who practice it are fallible. The results they produce are provisional and incomplete (at best) or flat-out wrong (at worst).

How can people honestly trust a method that, by design, changes its collective mind? How can people honestly respect a process that is, by design, more often wrong than right? How can people honestly understand a philosophical approach that, by design, steeps itself in arcane mathematics and jargon?

Let's start with trust. When we ask people to "trust" a particular scientist or result, we need to make sure that it's not necessarily the person who deserves the trust, but the method and structures that they represent.

Through painstakingly meticulous work, agonizingly slow timescales, and incessant revisions, we come to ever more refined descriptions of the world around us. And that is worthy of trust. Only when a particular result is placed in its proper context - motivations, current knowledge, scope of limitations, etc. - can we show communities what it means to trust scientists.

"Check Your Bias"

on Monday, 10 September 2018. Posted in From The Desk of...The Chief Scientist

I talk about bias a lot because bias is kind of important. And as if it weren't already difficult enough to constantly be on the lookout for the ways that biases can sneak and slither their way into a dataset or analysis or presentation, there's another source of bias that is much more pernicious and insidious. Thankfully it's surprisingly easy to locate: just look in the mirror.

It's so easy. You go into an experiment or observation or study or analysis expecting a certain result. You can't help it, even when you're trying to be impartial; it's human nature. And as soon as the data start to lean in a particular direction, or the analysis starts to confirm your suspicions, it's oh so tempting to call it a success, write it up, and move on.

But this is what gets you into trouble. What if you're looking at a false positive? What if there aren't enough data to justify your statistics? What if you made a mistake somewhere and overlooked it because you got the answer you thought you wanted?
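
To make that first worry concrete, here's a minimal simulation sketch in Python (the libraries, sample size, and threshold are illustrative assumptions of mine, not anything prescribed above). Even when there is no real effect at all, by construction, the conventional p < 0.05 test will still hand you a "discovery" about one time in twenty:

```python
# A minimal sketch of how false positives arise by chance alone.
# Sample sizes and the 0.05 threshold are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000  # hypothetical repeated studies
n_samples = 15          # a deliberately small sample per group

false_positives = 0
for _ in range(n_experiments):
    # Both groups come from the SAME distribution: no real effect exists.
    a = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    b = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < 0.05:  # the conventional significance threshold
        false_positives += 1

# Expect roughly 0.05: about one null experiment in twenty "succeeds".
print(f"False positive rate: {false_positives / n_experiments:.3f}")
```

Run enough analyses on a small dataset and one of those one-in-twenty flukes is practically guaranteed to land on your desk looking like the answer you wanted.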

Replication is one of the keys to the scientific paradigm, but we can't be lazy about it and just assume that someone else will do the boring repetitions for us. Nobody will be as close to the original data and setup as we are - it's up to us to perform the first rounds of repeating and cross-checking results ourselves.

It's easy to lie with data and give everything a veneer of respectability. And the person we most often lie to is ourselves. So one key to cracking internal bias is simple: get more data, and do the whole thing again.
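
As a rough sketch of what "do the whole thing again" buys you (same hypothetical two-group setup and illustrative numbers as the sketch above): a lucky small-sample fluke almost never survives a replication with more data.

```python
# A sketch of replication: re-test a "promising" small pilot result
# with a larger sample. Effect size and sample sizes are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def run_study(effect: float, n: int) -> float:
    """Simulate one two-group study and return its p-value."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(control, treated).pvalue

# There is no real effect, but keep running small pilots until one
# gets "lucky" - a chance false positive.
while True:
    pilot_p = run_study(effect=0.0, n=12)
    if pilot_p < 0.05:
        break

# Repeat the same experiment with far more data.
replication_p = run_study(effect=0.0, n=500)
print(f"pilot p = {pilot_p:.3f}, replication p = {replication_p:.3f}")
# The replication almost always fails to confirm the fluke.
```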
