Using It or Losing It? The Case for Data Scientists Inside Health Care

“As much as 30% of the entire world’s stored data is generated in the health care industry. A single patient typically generates close to 80 megabytes each year in imaging and electronic medical record (EMR) data. This trove of data has obvious clinical, financial, and operational value for the health care industry, and the new value pathways that such data could enable have been estimated by McKinsey to be worth more than $300 billion annually in reduced costs alone…”


Source

NEJM Catalyst: Using It or Losing It? The Case for Data Scientists Inside Health Care by Marco D. Huesch, MBBS, PhD & Timothy J. Mosher, MD

A Practical Introduction to Factor Analysis

A Practical Introduction to Factor Analysis: Exploratory Factor Analysis

Survey questions or “items” (e.g., on a scale from 1 to 5, how strongly do you agree with the following statement…) may be repeated measures of underlying “factors,” the true constructs that a survey attempts to measure. For example, one factor a survey might try to capture is the anxiety caused by learning statistical analysis (using SPSS software). Factor analysis is a method for understanding what is really being measured by multiple questions in a survey.
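To make this concrete, here is a minimal sketch in Python using scikit-learn’s FactorAnalysis on simulated 1-to-5 survey responses; the item wording, the two latent factors, and all numbers are made up for illustration and are not from the UCLA tutorial.

```python
# Minimal sketch: exploratory factor analysis on simulated survey items.
# Item wording and the choice of two factors are invented for illustration.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300

# Two hypothetical latent factors: "statistics anxiety" and "software confidence"
anxiety = rng.normal(size=n)
confidence = rng.normal(size=n)

# Six observed items, each driven mainly by one factor plus noise
items = np.column_stack([
    anxiety + rng.normal(scale=0.5, size=n),      # "I dread statistics class"
    anxiety + rng.normal(scale=0.5, size=n),      # "Formulas make me nervous"
    anxiety + rng.normal(scale=0.5, size=n),      # "I avoid quantitative work"
    confidence + rng.normal(scale=0.5, size=n),   # "I can run analyses in SPSS"
    confidence + rng.normal(scale=0.5, size=n),   # "I trust my software output"
    confidence + rng.normal(scale=0.5, size=n),   # "I pick up new software easily"
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)

# Loadings: rows are items, columns are factors; large absolute values show
# which underlying factor each question is really measuring.
print(np.round(fa.components_.T, 2))
```

The loadings should split the six items cleanly into two groups, which is exactly the “what is really being measured” question factor analysis tries to answer.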

There are a ton of new concepts in this class, but online resources are often a simpler and clearer way to learn them.



UCLA Institute for Digital Research and Education: A Practical Introduction to Factor Analysis: Exploratory Factor Analysis

Simpson’s Paradox

From Wikipedia

“Simpson’s paradox, or the Yule–Simpson effect, is a phenomenon in probability and statistics, in which a trend appears in several different groups of data but disappears or reverses when these groups are combined. It is sometimes given the descriptive title reversal paradox or amalgamation paradox.”

This seems counterintuitive, but the five-minute video below explains the concept well.
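For a quick numeric illustration, here is a small Python sketch using the classic kidney-stone treatment counts that are often quoted to demonstrate the paradox: treatment A has the higher success rate within each group, yet treatment B wins once the groups are pooled.

```python
# Simpson's paradox with the classic kidney-stone counts:
# treatment A beats B within each stone-size group, but loses overall.
groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},   # (successes, patients)
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

totals = {"A": [0, 0], "B": [0, 0]}
for group, arms in groups.items():
    for arm, (success, n) in arms.items():
        print(f"{group}, treatment {arm}: {success}/{n} = {success / n:.1%}")
        totals[arm][0] += success
        totals[arm][1] += n

for arm, (success, n) in totals.items():
    print(f"pooled, treatment {arm}: {success}/{n} = {success / n:.1%}")
```

Within each group A looks better (93.1% vs. 86.7%, and 73.0% vs. 68.8%), but pooled it looks worse (78.0% vs. 82.6%), because A was given far more of the difficult large-stone cases.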


Source

Wikipedia: Simpson’s paradox

Minute Physics: Simpson’s Paradox

Best Data Science Courses Online

The Best Free Data Science Courses on the Internet

Data science is blossoming as a field at the moment. Jargon from traditional statistics to new machine learning techniques is used colloquially in both online articles and day-to-day exchanges. One of the excellent things about data science, noted by David Venturi, is that the field is by nature computer-based. Why not learn about it all for free online, then? Venturi has written several articles listing massive open online courses (MOOCs), useful both to someone interested in a single highly ranked data science class and to the more dedicated individual assembling a complete master’s degree in data science. One benefit of these courses is that they are focused, covering only the knowledge relevant to applying data science skills. Another perk is the nonexistent price tag, as opposed to the tens or hundreds of thousands of dollars of student loans one could take on while pursuing a data science master’s at a formal institution. Venturi explains why he left grad school, before finishing his first semester, to learn data science on his own. If nothing else, some of these courses may be useful supplements to a graduate school education.


Sources

FreeCodeCamp.org: David Venturi

FreeCodeCamp.org: The best Data Science courses on the internet, ranked by your reviews

FreeCodeCamp.org: If you want to learn Data Science, take a few of these statistics classes

Medium.com: I Dropped Out of School to Create My Own Data Science Master’s — Here’s My Curriculum

The 7 Deadly Sins of Data Analysis

In her final lecture, my statistics professor described the “7 deadly sins” of statistics in cartoon form. Enjoy!


1. Correlation ≠ Causation


xkcd: Correlation


Dilbert: Correlation


2. Displaying Data Badly


xkcd: Convincing

Further reading on displaying data badly

The American Statistician: How to Display Data Badly by Howard Wainer

Johns Hopkins Bloomberg School of Public Health: How to Display Data Badly by Karl Broman


3. Failing to Assess Model Assumptions


DavidMLane.com: Statistics Cartoons by Ben Shabad


4. Over-Reliance on Hypothesis Testing


xkcd: Null Hypothesis

While we’re on the topic of hypothesis testing, don’t forget…

We can fail to reject the null hypothesis.

But we never accept the null hypothesis.
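As a small illustration (simulated data, an assumed 0.05 cutoff, and a one-sample t-test chosen just for concreteness), the wording matters: a large p-value means the data are consistent with the null, not that the null is true.

```python
# Simulated example: a large p-value means "fail to reject", not "accept".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# The true mean is 0.1, so the null hypothesis "mean = 0" is actually false,
# but with only 20 observations the test may well fail to detect it.
sample = rng.normal(loc=0.1, scale=1.0, size=20)

t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"p-value = {p_value:.2f}")

if p_value < 0.05:
    print("Reject the null hypothesis.")
else:
    # Note the wording: we do NOT conclude that the mean is exactly zero.
    print("Fail to reject the null hypothesis.")
```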


5. Drawing Inference from Biased Samples


Dilbert: Inferences


6. Data Dredging

If you try hard enough, eventually you can build a model that fits your data set.


Steve Moore: Got one

The key is to test the model on a new set of data, called a validation set. This can be done by splitting your data before building the model. Build the model using 80% of your original data, called a training set. Validate the model on the last 20% that you set aside at the beginning. Compare how the model performs on each of the two sets.

For example, let’s say you built a regression model on your training set (80% of the original data). Maybe it produces an R-squared value of 0.50, suggesting that your model explains 50% of the variation observed in the training set. In other words, the R-squared value is a way to assess how “good” the model is at describing the data, and at 50% it’s not that great.

Then, let’s say you try the model on the validation set (20% of the original data), and it produces an R-squared value of 0.25, suggesting your model explains only 25% of the variation observed in the validation set. The predictive ability of the model depends on which data set is used: it performs better on the training set (R-squared 0.50) than on the validation set (R-squared 0.25). This is called overfitting the model to the training set. It gives the impression that the model is more accurate than it really is; the true ability of the model can only be assessed once it has been validated on new data.
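A minimal sketch of that 80/20 workflow in Python with scikit-learn is below; the data are simulated, and a noticeably larger R-squared on the training set than on the validation set is the warning sign for overfitting.

```python
# Minimal sketch of an 80/20 train/validation split on simulated data.
# A model that scores much better on the training set than on the
# validation set is overfitting the training data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                 # 20 noisy predictors
y = X[:, 0] + rng.normal(scale=1.5, size=100)  # only the first one matters

# Set aside 20% of the data before building the model
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LinearRegression().fit(X_train, y_train)

print("training   R-squared:", round(r2_score(y_train, model.predict(X_train)), 2))
print("validation R-squared:", round(r2_score(y_val, model.predict(X_val)), 2))
```

With many predictors and relatively few observations, the training R-squared will typically come out well above the validation R-squared, which is exactly the overfitting pattern described above.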


7. Extrapolating Beyond Range of Data


xkcd: Extrapolating


Similar Ideas Elsewhere

Columbia: “Lies, damned lies, and statistics”: the seven deadly sins

Child Neuropsychology: Statistical practices: the seven deadly sins

Annals of Plastic Surgery: The seven deadly sins of statistical analysis

Statistics Done Wrong


Sources

xkcd: Correlation

Dilbert: Correlation

xkcd: Convincing

The American Statistician: How to Display Data Badly by Howard Wainer

Johns Hopkins Bloomberg School of Public Health: How to Display Data Badly by Karl Broman

DavidMLane.com: Statistics Cartoons by Ben Shabad

xkcd: Null Hypothesis

Dilbert: Inferences

Steve Moore: Got one

Wikipedia: Overfitting

xkcd: Extrapolating

Columbia: “Lies, damned lies, and statistics”: the seven deadly sins

Child Neuropsychology: Statistical practices: the seven deadly sins

Annals of Plastic Surgery: The seven deadly sins of statistical analysis

Statistics Done Wrong
