What the Heck is Structural Equation Modeling?

I took an entire semester on structural equation modeling (SEM), and I still have only a rather fuzzy idea of what it is. Below is the best technical definition I’ve arrived at.

A Brief Technical Definition of SEM

Structural equation modeling is a research method that combines factor analysis and path analysis to mathematically describe complex conceptual mechanisms in a medium-sized dataset.

Factor analysis measures latent traits indirectly through measurable variables, either by exploring new models or by confirming hypothesized ones.

Path analysis is a series of linear regressions in which one outcome (dependent variable) becomes the exposure (independent variable) for the next regression.
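
For readers who like to see syntax, here is a minimal sketch of what that combination looks like in R with the lavaan package (discussed again below). The latent factors and item names are invented for illustration, not taken from any real study.

```r
# A hypothetical sketch: the latent factors (Stress, Coping, WellBeing) and
# their indicator items (s1-s3, c1-c3, w1-w3) are invented for illustration.
library(lavaan)

model <- '
  # Factor analysis part: latent traits measured indirectly by observed items
  Stress    =~ s1 + s2 + s3
  Coping    =~ c1 + c2 + c3
  WellBeing =~ w1 + w2 + w3

  # Path analysis part: one outcome becomes the predictor for the next step
  Coping    ~ Stress
  WellBeing ~ Coping + Stress
'

# fit <- sem(model, data = my_survey_data)  # my_survey_data is a placeholder
# summary(fit, fit.measures = TRUE)
```

The =~ lines define the latent factors (the factor-analysis half), and the ~ lines chain the regressions together (the path-analysis half).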

That may make some semblance of sense if you’re familiar with statistics and psychometrics, the field of using surveys to measure educational and psychological abilities. So what is SEM to normal, decent, God-fearing people?

A Conceptual Definition of SEM

Structural equation modeling is a loosely defined quantitative method of building and then testing a mechanistic model. If a researcher understands a concept, they can propose a model in which one step affects another, which may then inhibit a third step, and so on. Once a model of sequential steps is described, it can be tested against similar models by various statistical methods. The inferior mechanistic model is discarded in an iterative process as the researcher strives ever nearer to the truth.
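
Continuing the hypothetical sketch above, competing models can be compared with lavaan's standard tools; the reduced model here (dropping the direct Stress-to-WellBeing path) is again purely illustrative.

```r
# Hypothetical: a simpler competing model without the direct
# Stress -> WellBeing path (i.e., full mediation through Coping).
model_reduced <- '
  Stress    =~ s1 + s2 + s3
  Coping    =~ c1 + c2 + c3
  WellBeing =~ w1 + w2 + w3

  Coping    ~ Stress
  WellBeing ~ Coping
'

# fit_full    <- sem(model, data = my_survey_data)
# fit_reduced <- sem(model_reduced, data = my_survey_data)
# anova(fit_full, fit_reduced)                     # likelihood ratio test for nested models
# fitMeasures(fit_full, c("cfi", "tli", "rmsea"))  # common fit indices
```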

Honestly, I don’t think the above description is elegant enough to explain SEM to my grandmother. But it should be enough to communicate with other students and researchers.


What do SEM Models Look Like?

Okay, don’t lose me here. Just relax and kind of let your eyes go out of focus. Don’t look at the details.

[Image: an example structural equation model diagram]

Whew, look away!

This is a Google Images result for an SEM diagram. I’ve no idea what it’s trying to explain. And that’s kind of the fun of SEM: you look really smart, but you’ll probably have a rough time explaining your theoretical mechanism to anyone.

These diagrams depict mechanisms, and they look like spiderwebs of circles and rectangles with arrows going everywhere. There’s a technical language to SEM diagrams that you can learn from some of the links at the bottom. But for now, just know how to spot an SEM diagram in the wild.


An Application of SEM: Survey Data

SEM is a method with a couple of really great applications and many terrible ones. It works well if you have a dataset on the order of 100 to 1,000 observations. The variables in the model should also be continuous, since dichotomous and integer variables lead to mathematical problems. For this reason, SEM is more common in psychology and the humanities than in medicine or epidemiology…at least for now. In particular, SEM tends to be applied to survey data.

Survey data takes a bunch of items, adds them up in some fashion, and produces a continuous score. We all know this from elementary school, where if you got 79% on a quiz and your friend got 80%, then she was smarter than you and that was that.
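
As a toy illustration (with made-up item responses), summing items into a score is a one-liner in R:

```r
# Made-up responses from two students on a five-item quiz, scored 0/1 per item
responses <- data.frame(
  item1 = c(1, 1), item2 = c(1, 1), item3 = c(0, 1),
  item4 = c(1, 1), item5 = c(1, 1)
)

sum_score <- rowSums(responses)                  # raw score per student
pct_score <- 100 * sum_score / ncol(responses)   # converted to a percentage
pct_score
```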

While this might serve some purpose in school, it’s not great when assessing whether a psychiatric patient is competent to make decisions with their physician on managing their chronic kidney disease. So instead we ask the patient a series of multiple-choice questions, score it according to a research-validated method, and then use clinical expertise and guidelines to decide if the patient really is capable of refusing medical advice. SEM may have been used by the team of researchers who validated that survey, which is now used by the psychiatrist in the emergency room…in a utopian, ideal world at least.

In the real world, clinical data is often not continuous, so SEM isn’t a great fit. Genetics research may already have a few good applications, but many fields do not. Hopefully, the potential uses of SEM will broaden as we get better at measuring every possible variable with greater and greater precision.

Imagine a world where we know exactly when the last dose of insulin was given for an entire population of diabetics. Then we could use the variable “minutes since last insulin dose was given”, instead of saying “a couple of hours ago” or “yesterday”. This information may be a useful intermediate step in an SEM model that attempts to characterize how processed food and alcohol exposure affects hypoglycemic seizures in impoverished diabetic patients.
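
Purely as a hypothetical sketch, here is how such a variable could slot in as an intermediate step, reusing the lavaan path syntax from earlier; every variable name is invented.

```r
# All variable names are hypothetical, for illustration only.
model_insulin <- '
  # exposures -> timing of the last insulin dose -> outcome
  minutes_since_insulin ~ processed_food_intake + alcohol_exposure
  seizure_rate          ~ minutes_since_insulin + processed_food_intake + alcohol_exposure
'

# fit <- sem(model_insulin, data = registry_data)  # registry_data is a placeholder
```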

Better yet, imagine if researchers knew exactly what these patients ate, how much of it, and how often. Creepy to many of us, but this sort of thing would get a dietary epidemiologist more riled up than Guy Fieri at that one pizza place in that one city.


Thank you for finishing my graduate-student-level explanation of SEM. I find it’s easy to sit in class listening to SEM lecture after SEM lecture and still not be able to explain what I was just told. Hopefully you can now decide whether SEM is worth pursuing further.

Below are several superior and much more thorough explanations of SEM.


Watch the Professionals Explain SEM

If you want the most accessible structural equation modeling lectures, check out Structural Equation Modelling (SEM): What it is and what it isn’t by Patrick Sturgis from the National Centre for Research Methods. I supplemented my coursework with this mini-series, and it was a great addition. Maybe the British accent is the right proper way to hear someone explain complex statistical methods.

A technical salesman explains how to use SEM in Stata, which I found to be the most useful way to conduct SEM in my class. Unfortunately, Stata costs more than my travel grant award, so it might not be useful to those without institutional access.

Fortunately, there’s a free R package for SEM called lavaan, which has a ton of documentation for those dedicated few to peruse.
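
If memory serves, lavaan’s own tutorial starts with a confirmatory factor analysis of its built-in HolzingerSwineford1939 dataset, which looks roughly like this:

```r
# Confirmatory factor analysis on lavaan's built-in HolzingerSwineford1939 data,
# following the three-factor model used in the package tutorial.
library(lavaan)

HS.model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'

fit <- cfa(HS.model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```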

Statistics Solutions has a 1-hour lecture on SEM, which might be great if you want a deeper dive.

Good luck.

MOOC: Learning From Data – Machine Learning

“This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applications. It enables computational systems to adaptively improve their performance with experience accumulated from the observed data. ML has become one of the hottest fields of study today, taken up by undergraduate and graduate students from 15 different majors at Caltech. This course balances theory and practice, and covers the mathematical as well as the heuristic aspects. The lectures below follow each other in a story-like fashion:

  • What is learning?
  • Can a machine learn?
  • How to do it?
  • How to do it well?
  • Take-home lessons.

“The 18 lectures are about 60 minutes each plus Q&A.”


Source

Caltech: Learning From Data Machine Learning Course by Yaser S. Abu-Mostafa

Textbook: Learning From Data by Yaser S. Abu-Mostafa and Malik Magdon-Ismail

 

What’s the Point? Centering Independent Variable on Mean in Regression Models

Centering continuous independent variables was one of the earliest lessons in my linear regression class. I was recently asked to explain what the point is of going through the trouble of centering. I was at a loss, and realized I had been assuming the answer was obvious when it was not.

After a quick Google search, I found this article, which explains the answer well. In short, centering is useful when interpreting the intercept is important. Her example involves the age at which infants develop language. Her original article is copied below.

Should You Always Center a Predictor on the Mean?

by Karen Grace-Martin

Centering predictor variables is one of those simple but extremely useful practices that is easily overlooked.

It’s almost too simple.

Centering simply means subtracting a constant from every value of a variable. What it does is redefine the 0 point for that predictor to be whatever value you subtracted. It shifts the scale over, but retains the units.

The effect is that the slope between that predictor and the response variable doesn’t change at all. But the interpretation of the intercept does.

The intercept is just the mean of the response when all predictors = 0. So when 0 is out of the range of data, that value is meaningless. But when you center X so that a value within the dataset becomes 0, the intercept becomes the mean of Y at the value you centered on.

What’s the point? Who cares about interpreting the intercept?

It’s true. In many models, you’re not really interested in the intercept. In those models, there isn’t really a point, so don’t worry about it.

But, and there’s always a but, in many models interpreting the intercept becomes really, really important. So whether and where you center becomes important too.

A few examples include models with a dummy-coded predictor, models with a polynomial (curvature) term, and random slope models.

Let’s look more closely at one of these examples.

In models with a dummy-coded predictor, the intercept is the mean of Y for the reference category—the category numbered 0. If there’s also a continuous predictor in the model, X2, that intercept is the mean of Y for the reference category only when X2=0.

If 0 is a meaningful value for X2 and within the data set, then there’s no reason to center. But if neither is true, centering will help you interpret the intercept.

For example, let’s say you’re doing a study on language development in infants. X1, the dummy-coded categorical predictor, is whether the child is bilingual (X1=1) or monolingual (X1=0). X2 is the age in months when the child spoke their first word, and Y is the number of words in their vocabulary for their primary language at 24 months.

If we don’t center X2, the intercept in this model will be the mean number of words in the vocabulary of monolingual children who uttered their first word at birth (X2=0).

And since infants never speak at birth, it’s meaningless.

A better approach is to center age at some value that is actually in the range of the data. One option, often a good one, is to use the mean age of first spoken word of all children in the data set.

This would make the intercept the mean number of words in the vocabulary of monolingual children for those children who uttered their first word at the mean age that all children uttered their first word.

One problem is that the mean age at which infants utter their first word may differ from one sample to another. This means you’re not always evaluating that mean at the exact same age. It’s not comparable across samples.

So another option is to choose a meaningful value of age that is within the values in the data set. One example may be at 12 months.

Under this option the interpretation of the intercept is the mean number of words in the vocabulary of monolingual children for those children who uttered their first word at 12 months.

The exact value you center on doesn’t matter as long as it’s meaningful, holds the same meaning across samples, and is within the range of the data. You may find that choosing the lowest value or the highest value of age is the best option. It’s up to you to decide the age at which it’s most meaningful to interpret the intercept.
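
To make this concrete, here is a small R sketch with simulated data loosely modeled on the infant-language example above; all numbers and variable names are invented, not from the original article.

```r
set.seed(42)

# Simulated data: all values invented for illustration
n <- 200
bilingual       <- rbinom(n, 1, 0.5)                   # 0 = monolingual, 1 = bilingual
age_first_word  <- round(rnorm(n, mean = 12, sd = 2))  # age at first word, in months
vocab_24_months <- 300 - 10 * age_first_word + 20 * bilingual + rnorm(n, sd = 15)

# Uncentered: the intercept is the predicted vocabulary of a monolingual child
# whose first word came at 0 months (meaningless)
coef(lm(vocab_24_months ~ bilingual + age_first_word))

# Centered on the sample mean: same slopes, but the intercept is now the predicted
# vocabulary of a monolingual child with an average age at first word
age_centered <- age_first_word - mean(age_first_word)
coef(lm(vocab_24_months ~ bilingual + age_centered))

# Centered at 12 months: the intercept is the predicted vocabulary of a
# monolingual child who spoke their first word at 12 months
age_12 <- age_first_word - 12
coef(lm(vocab_24_months ~ bilingual + age_12))
```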


Source

The Analysis Factor: Should You Always Center a Predictor on the Mean? by Karen Grace-Martin

Covariance vs. Correlation

Covariance and correlation are two statistical concepts that are closely related, both conceptually and in name. The excerpts below are from a concise article that differentiates them.

Difference Between Covariance and Correlation

“Correlation is a special case of covariance which can be obtained when the data is standardised. Now, when it comes to making a choice, which is a better measure of the relationship between two variables, correlation is preferred over covariance, because it remains unaffected by the change in location and scale, and can also be used to make a comparison between two pairs of variables.”

Key Differences Between Covariance and Correlation

“The following points are noteworthy so far as the difference between covariance and correlation is concerned:

  1. “A measure used to indicate the extent to which two random variables change in tandem is known as covariance. A measure used to represent how strongly two random variables are related is known as correlation.
  2. “Covariance is nothing but a measure of correlation. On the contrary, correlation refers to the scaled form of covariance.
  3. “The value of correlation takes place between -1 and +1. Conversely, the value of covariance lies between -∞ and +∞.
  4. “Covariance is affected by the change in scale, i.e. if all the value of one variable is multiplied by a constant and all the value of another variable are multiplied, by a similar or different constant, then the covariance is changed. As against this, correlation is not influenced by the change in scale.
  5. “Correlation is dimensionless, i.e. it is a unit-free measure of the relationship between variables. Unlike covariance, where the value is obtained by the product of the units of the two variables.”
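
A quick R sketch (with simulated numbers) demonstrates point 4: rescaling a variable changes the covariance but leaves the correlation untouched.

```r
set.seed(1)

# Simulated heights (cm) and weights (kg)
height_cm <- rnorm(50, mean = 170, sd = 10)
weight_kg <- 0.5 * height_cm + rnorm(50, sd = 5)

cov(height_cm, weight_kg)   # depends on the units of both variables
cor(height_cm, weight_kg)   # unit-free, always between -1 and +1

# Rescale height to meters: the covariance shrinks by a factor of 100,
# the correlation is unchanged
height_m <- height_cm / 100
cov(height_m, weight_kg)
cor(height_m, weight_kg)

# Correlation is just the covariance of the standardized variables
cov(scale(height_cm), scale(weight_kg))
```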

Source

Difference Between Covariance and Correlation by Surbhi S

Invalidating Bloodletting with Science

Blood on the Tracks – Podcast Episode 38

Learn about a piece of epidemiological history: one of the earliest examples of population-level clinical studies influencing medical practice. This podcast tells the story of how the French physician Pierre Charles Alexandre Louis studied a group of patients and ended up producing quantitative evidence of the harms of bloodletting. Learning the history helps place these tools in a broader context, which isn’t crucial but is interesting nonetheless.

Listen to the Podcast here

The first population study in history was born out of a dramatic debate involving leeches, “medical vampires,” professional rivalries, murder accusations, and, of course, bloodletting, all in the backdrop of the French Revolution. The second of a multipart series on the development of population medicine, this episode contextualizes Pierre Louis’ “numerical method,” his famous trial on bloodletting, and the birth of a new way for doctors to “know”.


Source

Bedside Rounds: Episode 38: Blood on the Tracks (PopMed #2)

Using It or Losing It? The Case for Data Scientists Inside Health Care

“As much as 30% of the entire world’s stored data is generated in the health care industry. A single patient typically generates close to 80 megabytes each year in imaging and electronic medical record (EMR) data. This trove of data has obvious clinical, financial, and operational value for the health care industry, and the new value pathways that such data could enable have been estimated by McKinsey to be worth more than $300 billion annually in reduced costs alone…”


Source

NEJM Catalyst: Using It or Losing It? The Case for Data Scientists Inside Health Care by Marco D. Huesch, MBBS, PhD & Timothy J. Mosher, MD
