What the Heck is Structural Equation Modeling?

I took an entire semester on structural equation modeling (SEM), and I still only have a rather fuzzy idea of what it is. What follows is the best technical definition I’ve arrived at.

A Brief Technical Definition of SEM

Structural equation modeling is a research method that combines factor analysis and path analysis to mathematically describe complex conceptual mechanisms in a medium-sized dataset.

Factor analysis measures latent traits indirectly from measurable variables, either by exploring new models or by confirming hypothesized ones.

Path analysis is repeated linear regression, where one outcome, or dependent variable, becomes the exposure, or independent variable, for the subsequent regression.
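To make that slightly more concrete, here is a minimal sketch of what those two pieces look like in lavaan, the free R package for SEM linked at the bottom of this post. Every variable name below is made up, purely for illustration:

# Hypothetical lavaan model syntax; all variable names are invented.
model <- '
  # Factor analysis piece: =~ means "is measured by".
  # Two latent traits, each measured indirectly by three survey items.
  stress =~ item1 + item2 + item3
  coping =~ item4 + item5 + item6

  # Path analysis piece: ~ means "is regressed on".
  # The outcome of one regression becomes the predictor in the next.
  coping  ~ stress
  outcome ~ coping
'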

That may make some semblance of sense if you’re familiar with statistics and psychometrics, the field of using surveys to measure educational and psychological abilities. So what is SEM to normal, decent, God-fearing people?

A Conceptual Definition of SEM

Structural equation modeling is a loosely defined quantitative method of building and then testing a mechanistic model. If a researcher understands a concept, they can propose a model where one step affects another, which may then inhibit a third step, and so on. Once a model of sequential steps is described, it can be tested against similar models with various statistical methods. Inferior mechanistic models are discarded in an iterative process as the researcher strives ever nearer to the truth.
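In lavaan, that iterative testing step can look like the sketch below: fit two competing models and compare them. The model names and the data frame are hypothetical placeholders; anova() on nested lavaan fits runs a chi-square difference test:

library(lavaan)

# "model_full", "model_reduced", and the data frame "df" are placeholders.
fit_full    <- sem(model_full, data = df)
fit_reduced <- sem(model_reduced, data = df)

# Chi-square difference test for nested models; the model that fits
# significantly worse is the one that gets discarded.
anova(fit_reduced, fit_full)

# Non-nested models can be compared with information criteria instead.
AIC(fit_full)
AIC(fit_reduced)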

Honestly, I don’t think the above description is elegant enough to explain SEM to my grandmother. But it should be enough to communicate with other students and researchers.


What do SEM Models Look Like?

Okay, don’t lose me here. Just relax and kind of let your eyes go out of focus. Don’t look at the details.

[Image: a typical SEM path diagram, found via Google image search]

Whew, look away!

This is a Google image of an SEM diagram. I’ve no idea what it’s trying to explain. And that’s kind of the fun of SEM: you look really smart, but you’ll probably have a rough time explaining your theoretical mechanism to anyone.

These are mechanisms, and they look like spiderwebs of circles and rectangles with arrows going everywhere. There’s a technical language to the SEM model diagrams that you can learn from some of the links at the bottom. But for now, just know how to spot an SEM model diagram in the wild.


An Application of SEM: Survey Data

SEM is a method with a couple of really great applications and many terrible ones. It’s great if you have a dataset on the order of 100 to 1,000 observations. The variables in the model should also be continuous, as dichotomous and integer variables lead to mathematical problems. For this reason, SEM is more common in psychology and the humanities than in medicine or epidemiology…at least for now. In particular, SEM tends to be applied to survey data.

Survey data takes a bunch of items, adds them up in some fashion, and produces a continuous score. We all know this from elementary school, where if you got 79% on a quiz and your friend got 80%, then she was smarter than you and that was that.

While this might serve some purpose in school, it’s not great when assessing whether a psychiatric patient is competent to make decisions with their physician on managing their chronic kidney disease. So instead we ask the patient a series of multiple-choice questions, score it according to a research-validated method, and then use clinical expertise and guidelines to decide if the patient really is capable of refusing medical advice. SEM may have been used by the team of researchers who validated that survey, which is now used by the psychiatrist in the emergency room…in a utopian, ideal world at least.

In the real world, clinical data is often not continuous, and SEM isn’t a great fit. Genetics research may already have a few good applications, but many fields do not. Hopefully, the potential uses of SEM will broaden as we get better at measuring every possible variable with greater and greater precision.

Imagine a world where we know exactly when the last dose of insulin was given for an entire population of diabetics. Then we could use the variable “minutes since last insulin dose was given”, instead of saying “a couple of hours ago” or “yesterday”. This information may be a useful intermediate step in an SEM model that attempts to characterize how processed food and alcohol exposure affects hypoglycemic seizures in impoverished diabetic patients.

Better yet, imagine if the researchers knew exactly what these patients ate, how much of it, and how often. Creepy to many of us, but this sort of thing would get a dietary epidemiologist more riled up than Guy Fieri at that one pizza place in that one city.

[GIF: Guy Fieri, excited, eating]

Thank you for finishing my graduate-student-level explanation of SEM. I find it’s easy to sit in class listening to SEM lecture after SEM lecture and still not be able to explain what I was just told. Hopefully you can now decide whether it’s worth pursuing further.

Below are several superior and much more thorough explanations of SEM.


Watch the Professionals Explain SEM

If you want the most accessible structural equation modeling lectures, check out Structural Equation Modelling (SEM): What it is and what it isn’t by Patrick Sturgis from the National Centre for Research Methods. I supplemented my coursework with this mini-series, and it was a great addition. Maybe the British accent is the right proper way to hear someone explain complex statistical methods.

A technical salesman explains how to use SEM in Stata, which I found the most useful way to conduct SEM in my class. Unfortunately, Stata costs more than my travel grant award, so it might not be useful to those without institutional access.

Fortunately, there’s a free R package for SEM called lavaan, which has a ton of documentation for those dedicated few who want to peruse it.
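As a taste, here is the classic confirmatory factor analysis example from lavaan’s own tutorial, using the HolzingerSwineford1939 dataset that ships with the package:

# install.packages("lavaan")  # one-time installation
library(lavaan)

# Three latent abilities, each measured by three test scores (x1-x9).
HS.model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'

fit <- cfa(HS.model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE)  # loadings plus fit indices like CFI and RMSEA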

Statistics Solutions has a 1-hour lecture on SEM, which might be great for a deeper dive.

Good luck.

The Science of Scientific Writing

“If the reader is to grasp what the writer means, the writer must understand what the reader needs.”


“Science is often hard to read. Most people assume that its difficulties are born out of necessity, out of the extreme complexity of scientific concepts, data and analysis. We argue here that complexity of thought need not lead to impenetrability of expression; we demonstrate a number of rhetorical principles that can produce clarity in communication without oversimplifying scientific issues. The results are substantive, not merely cosmetic: Improving the quality of writing actually improves the quality of thought.”


Source

American Scientist: The Science of Scientific Writing by George Gopen and Judith Swan

Data Fraud in Clinical Trials

“Highly publicized cases of fabrication or falsification of data in clinical trials have occurred in recent years and it is likely that there are additional undetected or unreported cases. We review the available evidence on the incidence of data fraud in clinical trials, describe several prominent cases, present information on motivation and contributing factors and discuss cost-effective ways of early detection of data fraud as part of routine central statistical monitoring of data quality. Adoption of these clinical trial monitoring procedures can identify potential data fraud not detected by conventional on-site monitoring and can improve overall data quality.”


Source

NCBI: Data fraud in clinical trials

MOOC: Learning From Data – Machine Learning

“This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applications. It enables computational systems to adaptively improve their performance with experience accumulated from the observed data. ML has become one of the hottest fields of study today, taken up by undergraduate and graduate students from 15 different majors at Caltech. This course balances theory and practice, and covers the mathematical as well as the heuristic aspects. The lectures below follow each other in a story-like fashion:

  • What is learning?
  • Can a machine learn?
  • How to do it?
  • How to do it well?
  • Take-home lessons.

“The 18 lectures are about 60 minutes each plus Q&A.”


Source

Caltech: Learning From Data – Machine Learning Course by Yaser S. Abu-Mostafa

Textbook: Learning From Data by Yaser S. Abu-Mostafa and Malik Magdon-Ismail


What’s the Point? Centering Independent Variables on the Mean in Regression Models

Centering continuous independent variables was one of the earliest lessons in my linear regression class. I was recently asked, “What’s the point of going through the trouble of centering?” I was at a loss, and realized I had been assuming the answer was obvious when it was not.

After a quick Google search, I found this article, which explains the answer well. In short, centering is useful when interpreting the intercept is important. Her example involves the age of language development in infants. Her original article is copied below.

Should You Always Center a Predictor on the Mean?

by Karen Grace-Martin

Centering predictor variables is one of those simple but extremely useful practices that is easily overlooked.

It’s almost too simple.

Centering simply means subtracting a constant from every value of a variable. What it does is redefine the 0 point for that predictor to be whatever value you subtracted. It shifts the scale over, but retains the units.

The effect is that the slope between that predictor and the response variable doesn’t change at all. But the interpretation of the intercept does.

The intercept is just the mean of the response when all predictors = 0. So when 0 is out of the range of data, that value is meaningless. But when you center X so that a value within the dataset becomes 0, the intercept becomes the mean of Y at the value you centered on.
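Here is a quick sketch of that in R with simulated data: the slope is identical either way, but the intercept moves from “predicted Y at X = 0” to “predicted Y at the mean of X”.

set.seed(42)

# Simulated predictor whose values are nowhere near 0
x <- rnorm(200, mean = 50, sd = 5)
y <- 10 + 0.8 * x + rnorm(200)

# Uncentered: the intercept is the predicted y at x = 0, far outside the data
coef(lm(y ~ x))

# Centered: same slope, but the intercept is now the predicted y at mean(x)
x_c <- x - mean(x)
coef(lm(y ~ x_c))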

What’s the point? Who cares about interpreting the intercept?

It’s true. In many models, you’re not really interested in the intercept. In those models, there isn’t really a point, so don’t worry about it.

But, and there’s always a but, in many models interpreting the intercept becomes really, really important. So whether and where you center becomes important too.

A few examples include models with a dummy-coded predictor, models with a polynomial (curvature) term, and random slope models.

Let’s look more closely at one of these examples.

In models with a dummy-coded predictor, the intercept is the mean of Y for the reference category—the category numbered 0. If there’s also a continuous predictor in the model, X2, that intercept is the mean of Y for the reference category only when X2=0.

If 0 is a meaningful value for X2 and within the data set, then there’s no reason to center. But if neither is true, centering will help you interpret the intercept.

For example, let’s say you’re doing a study on language development in infants. X1, the dummy-coded categorical predictor, is whether the child is bilingual (X1=1) or monolingual (X1=0). X2 is the age in months when the child spoke their first word, and Y is the number of words in their vocabulary for their primary language at 24 months.

If we don’t center X2, the intercept in this model will be the mean number of words in the vocabulary of monolingual children who uttered their first word at birth (X2=0).

And since infants never speak at birth, it’s meaningless.

A better approach is to center age at some value that is actually in the range of the data. One option, often a good one, is to use the mean age of first spoken word of all children in the data set.

This would make the intercept the mean number of words in the vocabulary of monolingual children for those children who uttered their first word at the mean age that all children uttered their first word.

One problem is that the mean age at which infants utter their first word may differ from one sample to another. This means you’re not always evaluating that mean at the exact same age. It’s not comparable across samples.

So another option is to choose a meaningful value of age that is within the values in the data set. One example may be at 12 months.

Under this option the interpretation of the intercept is the mean number of words in the vocabulary of monolingual children for those children who uttered their first word at 12 months.
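A quick sketch of that model in R, with simulated data standing in for the real study (every number here is invented for illustration):

set.seed(1)
n <- 300

# X1: 1 = bilingual, 0 = monolingual (the reference category)
bilingual <- rbinom(n, 1, 0.5)

# X2: age in months at first word, roughly 8 to 16 months
age_first_word <- round(runif(n, 8, 16))

# Y: vocabulary size at 24 months (an invented relationship)
vocab <- 300 - 15 * (age_first_word - 12) - 40 * bilingual + rnorm(n, sd = 30)

# Center age at 12 months: the intercept is now the mean vocabulary of
# monolingual children whose first word came at 12 months
age_c12 <- age_first_word - 12
coef(lm(vocab ~ bilingual + age_c12))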

The exact value you center on doesn’t matter as long as it’s meaningful, holds the same meaning across samples, and is within the range of the data. You may find that choosing the lowest value or the highest value of age is the best option. It’s up to you to decide the age at which it’s most meaningful to interpret the intercept.


Source

The Analysis Factor: Should You Always Center a Predictor on the Mean? by Karen Grace-Martin

100 years of the FDA

“The 1906 pure food and drug act was set up to protect US citizens from unregulated and potentially harmful products. Implementing the regulation has presented the US Food and Drug Administration with many high-profile challenges, as Fiona Case finds out.”



Source

Chemistry World: 100 years of the FDA (2006) by Fiona Case
