How psychometrics and medicine have trouble communicating
“If the reader is to grasp what the writer means, the writer must understand what the reader needs.”
“Science is often hard to read. Most people assume that its difficulties are born out of necessity, out of the extreme complexity of scientific concepts, data and analysis. We argue here that complexity of thought need not lead to impenetrability of expression; we demonstrate a number of rhetorical principles that can produce clarity in communication without oversimplifying scientific issues. The results are substantive, not merely cosmetic: Improving the quality of writing actually improves the quality of thought.”
American Scientist: The Science of Scientific Writing by George D. Gopen and Judith A. Swan
“This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applications. It enables computational systems to adaptively improve their performance with experience accumulated from the observed data. ML has become one of the hottest fields of study today, taken up by undergraduate and graduate students from 15 different majors at Caltech. This course balances theory and practice, and covers the mathematical as well as the heuristic aspects. The lectures below follow each other in a story-like fashion:
- What is learning?
- Can a machine learn?
- How to do it?
- How to do it well?
- Take-home lessons.
“The 18 lectures are about 60 minutes each plus Q&A.”
Caltech: Learning From Data Machine Learning Course by Yaser S. Abu-Mostafa
Textbook: Learning From Data by Yaser S. Abu-Mostafa and Malik Magdon-Ismail
Centering continuous independent variables was one of the earliest lessons in my linear regression class. I was recently asked what the point of going through the trouble of centering is. I was at a loss, and realized I had been assuming the answer was obvious when it was not.
After a quick Google search, this article explained the answer well. In short, centering is useful when interpreting the intercept is important. Her example uses the age at which infants develop language. Her original article has been copied below.
Should You Always Center a Predictor on the Mean?
by Karen Grace-Martin
Centering predictor variables is one of those simple but extremely useful practices that is easily overlooked.
It’s almost too simple.
Centering simply means subtracting a constant from every value of a variable. What it does is redefine the 0 point for that predictor to be whatever value you subtracted. It shifts the scale over, but retains the units.
The effect is that the slope between that predictor and the response variable doesn’t change at all. But the interpretation of the intercept does.
The intercept is just the mean of the response when all predictors = 0. So when 0 is out of the range of data, that value is meaningless. But when you center X so that a value within the dataset becomes 0, the intercept becomes the mean of Y at the value you centered on.
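This is easy to verify numerically. Here is a minimal sketch (the data and coefficients are made up for illustration) showing that centering a predictor leaves the slope untouched, while the intercept becomes the mean of Y at the centered value:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(10, 18, 100)                     # predictor; 0 is far outside its range
y = 50.0 + 12.0 * (x - x.mean()) + rng.normal(0, 5, 100)

def ols(x, y):
    """Fit y = b0 + b1*x by least squares; return (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_raw = ols(x, y)                 # intercept is the (meaningless) mean of y at x = 0
b_centered = ols(x - x.mean(), y) # intercept is now the mean of y at the mean of x
```

The slope `b_raw[1]` equals `b_centered[1]` exactly; only the intercept changes, and after mean-centering it equals the sample mean of `y`.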
What’s the point? Who cares about interpreting the intercept?
It’s true. In many models, you’re not really interested in the intercept. In those models, there isn’t really a point, so don’t worry about it.
But, and there’s always a but, in many models interpreting the intercept becomes really, really important. So whether and where you center becomes important too.
A few examples include models with a dummy-coded predictor, models with a polynomial (curvature) term, and random slope models.
Let’s look more closely at one of these examples.
In models with a dummy-coded predictor, the intercept is the mean of Y for the reference category—the category numbered 0. If there’s also a continuous predictor in the model, X2, that intercept is the mean of Y for the reference category only when X2=0.
If 0 is a meaningful value for X2 and within the data set, then there’s no reason to center. But if neither is true, centering will help you interpret the intercept.
For example, let’s say you’re doing a study on language development in infants. X1, the dummy-coded categorical predictor, is whether the child is bilingual (X1=1) or monolingual (X1=0). X2 is the age in months when the child spoke their first word, and Y is the number of words in their vocabulary for their primary language at 24 months.
If we don’t center X2, the intercept in this model will be the mean number of words in the vocabulary of monolingual children who uttered their first word at birth (X2=0).
And since infants never speak at birth, it’s meaningless.
A better approach is to center age at some value that is actually in the range of the data. One option, often a good one, is to use the mean age of first spoken word of all children in the data set.
This would make the intercept the mean number of words in the vocabulary of monolingual children who uttered their first word at the mean age at which all children uttered their first word.
One problem is that the mean age at which infants utter their first word may differ from one sample to another. This means you’re not always evaluating the intercept at the exact same age, so it’s not comparable across samples.
So another option is to choose a meaningful value of age that is within the values in the data set. One example may be at 12 months.
Under this option the interpretation of the intercept is the mean number of words in the vocabulary of monolingual children for those children who uttered their first word at 12 months.
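As a sketch of this setup (using simulated data with made-up coefficients, not real language-development results), the model with a dummy-coded bilingual indicator and age centered at 12 months might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
bilingual = rng.integers(0, 2, n)           # X1: 1 = bilingual, 0 = monolingual
age_first_word = rng.uniform(8, 16, n)      # X2: age in months at first word

# Hypothetical "true" model used to simulate the data
vocab = 300.0 - 15.0 * (age_first_word - 12) - 40.0 * bilingual + rng.normal(0, 10, n)

# Design matrix: intercept, dummy, and age centered at 12 months
X = np.column_stack([np.ones(n), bilingual, age_first_word - 12])
beta, *_ = np.linalg.lstsq(X, vocab, rcond=None)

# beta[0] estimates the mean vocabulary of monolingual children (X1 = 0)
# whose first word came at exactly 12 months (centered X2 = 0).
```

Because the data were simulated with an intercept of 300, the fitted `beta[0]` lands close to 300, and its interpretation matches the sentence above: monolingual children, first word at 12 months.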
The exact value you center on doesn’t matter as long as it’s meaningful, holds the same meaning across samples, and falls within the range of the data. You may find that choosing the lowest or the highest value of age is the best option. It’s up to you to decide the age at which it’s most meaningful to interpret the intercept.
The Analysis Factor: Should You Always Center a Predictor on the Mean? by Karen Grace-Martin
“How not to collaborate with a biostatistician. This is what happens when two people are speaking different research languages! My current workplace is nothing like this, but I think most biostatisticians have had some kind of similar experiences like this in the past!”
YouTube: Biostatistics vs. Lab Research by JavaMama926
Covariance and correlation are two statistical concepts that are closely related, both conceptually and by their name. The excerpts below are from a concise article that differentiates them.
Difference Between Covariance and Correlation
“Correlation is a special case of covariance which can be obtained when the data is standardised. Now, when it comes to making a choice, which is a better measure of the relationship between two variables, correlation is preferred over covariance, because it remains unaffected by the change in location and scale, and can also be used to make a comparison between two pairs of variables.”
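The “standardised covariance” claim can be checked directly. In this sketch (with made-up linearly related data), z-scoring both variables and then taking their covariance reproduces the Pearson correlation:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)   # arbitrary linearly related data

def standardize(v):
    """Subtract the mean and divide by the (population) standard deviation."""
    return (v - v.mean()) / v.std()

# Covariance of the standardized variables (ddof=0 to match the population std above)
cov_std = np.cov(standardize(x), standardize(y), ddof=0)[0, 1]
corr = np.corrcoef(x, y)[0, 1]
# cov_std and corr are the same number: correlation is standardized covariance.
```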
Key Differences Between Covariance and Correlation
“The following points are noteworthy so far as the difference between covariance and correlation is concerned:
- “A measure used to indicate the extent to which two random variables change in tandem is known as covariance. A measure used to represent how strongly two random variables are related is known as correlation.
- “Covariance is nothing but a measure of correlation. On the contrary, correlation refers to the scaled form of covariance.
- “The value of correlation lies between -1 and +1. Conversely, the value of covariance lies between -∞ and +∞.
- “Covariance is affected by a change in scale, i.e. if all the values of one variable are multiplied by a constant, and all the values of the other variable are multiplied by the same or a different constant, then the covariance changes. In contrast, correlation is not influenced by a change in scale.
- “Correlation is dimensionless, i.e. it is a unit-free measure of the relationship between variables. Covariance, by contrast, is expressed in the product of the units of the two variables.”
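The scale-sensitivity point is worth seeing concretely. In this sketch (made-up height/weight data), converting heights from meters to centimeters multiplies the covariance by 100 but leaves the correlation unchanged:

```python
import numpy as np

rng = np.random.default_rng(7)
height_m = rng.normal(1.7, 0.1, 300)                       # heights in meters
weight_kg = 60 + 40 * (height_m - 1.7) + rng.normal(0, 5, 300)

cov_m = np.cov(height_m, weight_kg)[0, 1]                  # units: m * kg
cov_cm = np.cov(height_m * 100, weight_kg)[0, 1]           # same data in cm: 100x larger

corr_m = np.corrcoef(height_m, weight_kg)[0, 1]            # dimensionless
corr_cm = np.corrcoef(height_m * 100, weight_kg)[0, 1]     # identical to corr_m
```

This is why correlation is the preferred measure for comparing relationships across variable pairs: it does not depend on the units the variables happen to be measured in.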
The Importance of Stupidity in Scientific Research
“Science makes me feel stupid too. It’s just that I’ve gotten used to it. So used to it, in fact, that I actively seek out new opportunities to feel stupid. I wouldn’t know what to do without that feeling. I even think it’s supposed to be this way.”
“Productive stupidity means being ignorant by choice. Focusing on important questions puts us in the awkward position of being ignorant. One of the beautiful things about science is that it allows us to bumble along, getting it wrong time after time, and feel perfectly fine as long as we learn something each time. No doubt, this can be difficult for students who are accustomed to getting the answers right. No doubt, reasonable levels of confidence and emotional resilience help, but I think scientific education might do more to ease what is a very big transition: from learning what other people once discovered to making your own discoveries. The more comfortable we become with being stupid, the deeper we will wade into the unknown and the more likely we are to make big discoveries.”
The Journal of Cell Science: The Importance of Stupidity in Scientific Research