Today’s tutorial involves picking up a useful new weapon for your data analytic arsenal, one that I’ve used quite a bit over the past year of my graduate training. We’re going to look at a novel way of estimating & graphing interactions in the context of multiple regression (one that even extends to structural equation models), using my increasingly go-to program, *Mplus.* Note that the tips below have been tested and work in *Mplus* versions 6 and 7. Using these procedures in any earlier version is a total crap shoot — meaning I haven’t verified whether or not they work in version 5 or older — so bear that in mind.

**A bit of background**
*(or skip ahead to the Mplus stuff)*

Quite often in correlational and quasi-experimental research, we’re interested in determining the extent to which the effect that some predictor variable (X) has on some outcome (Y) is systematically different across levels of some third, influential variable (M, which in this case is a moderator).

Let’s say that in this overly simplified example, we want to know more about how the stress of experiencing negative life events predicts a person’s health symptoms over a 3 month period. Conceptually, this would be a straightforward X –> Y relationship, where:

X = Subjective stress/severity of negative life events experienced

Y = Severity of health symptoms reported 3 months later

Suppose we hypothesize that married people are less strongly affected by life stress than single people. In this simplest of cases, the moderator variable is just your ordinary, run-of-the-mill categorical (or nominal) variable: relationship status (single vs. married). If you wanted to use multiple regression to test the interaction I described above with a categorical moderator, you’d effectively be running an incredibly simplified random coefficients (or random slopes) model. You’re essentially testing whether the X –> Y relationship differs depending on which of the two moderator groups a person is in, while adjusting for the association between Y and the moderator itself. (That adjustment could be quite important here, as marital status probably has its own effect on health symptoms too, regardless of stressful experiences.)

So a logical question here is this: *what happens if you’re using a moderator that isn’t just a categorical variable?* Suppose instead of relationship status, we wanted to look at this X–>Y relationship in terms of a person’s levels of dispositional negativity (often called neuroticism). We might hypothesize that people who score high on a negativity measure might be even more strongly affected by life stressors than those who score low on the negativity measure. The problem is, there are no naturally existing categories of negativity to neatly compare here. *So what do we do?*

**Option 1: Split the data according to your negativity variable into “high vs. low.”** Dichotomizing your data (i.e., creating two mutually-exclusive categories) like this often takes the form of the well-known “median split” procedure. Basically, you’d find the median of your negativity scores, and anyone who scores above it is categorized as “high negativity” while those below it are categorized as “low negativity.” This was once a common practice (and some folks still do this on occasion), but statistically speaking, this is a pretty terrible idea. It’s well known that artificially creating categories in this way will compromise the power of your statistical analysis (see my post on median-splits for an explanation of precisely how & why this happens). Aside from the power issue, it’s conceptually indefensible. Where do you draw the line between high and low, and how do you justify that arbitrary decision? Does it really make sense to distinguish conceptually between people who are just barely on opposite sides of the split? The short answer is, probably not. So scratch that. What else can we do?
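If you want to see the power cost for yourself, here’s a quick simulation sketch (in Python rather than Mplus, purely for illustration; the sample size, random seed, and true effect size of .5 are arbitrary choices of mine):

```python
import math
import random

def corr(xs, ys):
    # Pearson correlation, computed from scratch
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

random.seed(42)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
# y relates to x with a true correlation of about .5
y = [0.5 * xi + math.sqrt(1 - 0.5 ** 2) * random.gauss(0, 1) for xi in x]

r_continuous = corr(x, y)

# Median-split x into "low" (0) vs. "high" (1) groups
med = sorted(x)[n // 2]
x_split = [1.0 if xi > med else 0.0 for xi in x]
r_split = corr(x_split, y)

# The split version throws away information, so its correlation
# with y comes out systematically weaker than the continuous one.
print(r_split < r_continuous)  # True
```

For a normally distributed predictor, the median-split correlation shrinks to roughly 80% of the continuous correlation (a factor of √(2/π) ≈ .798), and that attenuation is exactly what the simulation reproduces.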

**Option 2: Don’t split the data. Instead, run your regression with the continuous moderator, and plot the interaction using outcome values for those who are 1 standard deviation above and below the mean for your moderator.** This is a very commonplace option. It’s certainly more defensible than option 1 above, and easier to interpret conceptually. However, there are still issues with this approach as well.

*Issue #1:* The standard deviations of your moderator are entirely sample-dependent. This isn’t a huge problem by itself; frankly, these sample standard deviations are usually all we’ve got to work with (after all, that’s the entire point of estimating population parameters — we just hope that our data are well-behaved enough to allow us to reasonably approximate the population values we want). In fact, the J-N procedure described below will piggyback slightly on this standard-deviation approach. Still, it’s worth keeping in mind that plotting values of the outcome at these *-1SD/+1SD* moderator values may not represent a meaningful distinction in the real world if your sample is weird for any reason (e.g., low variability or influential outliers). And bear in mind, this weirdness can itself influence whether or not you even find a statistically significant interaction term to begin with. Which brings us to…

*Issue #2:* The relationship between X and Y might not have a meaningful interpretation within the range of values of the moderator that you’ve specified or observed, but a meaningful interaction might still in fact exist in the population. This too can be sample-specific, and reflects an issue with restriction of range. It may be the case that there are population values of the moderator that are simply not well represented in your sample, and it is only when these population values are reached that you see systematic differences in the relationship between X & Y. In fact, this is precisely what the J-N procedure can show you (hint hint – we’ll see this happen later on). Related to this point, there’s…

*Issue #3:* Plotting a -1SD/+1SD interaction doesn’t provide a high-resolution picture of precisely “where” a person’s level of negativity really makes a difference in determining whether stressful life events impact health symptoms. In a sense, choosing the -1SD/+1SD plot points is arbitrary (we do it because it’s fairly easy to comprehend, and let’s face it, others who came before us have been doing it too). But why not plot .6 standard deviations around the mean of negativity? Why not 1.2? Why not .77? Why not 2.88431? You get the idea — like I said, it’s arbitrary. It would be *better* if we could say whether there is a specific point along the continuum of negativity at which the relationship between X & Y is (statistically) significantly different from zero, and *then* go on to plot that continuum in a meaningful fashion, so that we can see exactly how that X–>Y relationship changes continuously across levels of your negativity variable. In other words, what we really want to see are *regions of significance.*

Turns out, there is a way to do just that using *Mplus*.

**Plotting continuous x continuous interactions by utilizing the Johnson-Neyman procedure.**

To illustrate the uses of the Johnson-Neyman (or *J-N*) procedure, I went ahead and simulated some data using SPSS (as an aside, I plan to write a tutorial on data simulation in the near future, so stay tuned!). With the J-N procedure — which piggybacks on ooooooold methods of examining interactions, developed by Johnson & Neyman (1936) — you can see the precise regions of the continuum of negativity values for which the regression slope of stress (*X–>Y*) is estimated to be significantly different from zero.

I should provide one caveat — this procedure is not straightforward. It’s not excessively difficult either, but it does require a few manual steps, and a rudimentary understanding of the algebra involved in multiple regression with interaction terms. Also, **you’ll need to write and run your program in two parts**.

**PART ONE: Getting your sample estimates.**

**Step 1:** Make your life easier by centering your variables *before* you create your *Mplus* data set. I already did this in the data that I simulated for the example here, by creating standardized versions of all my variables.

You *could* center your variables as part of your program directly in *Mplus*, but I don’t recommend this. You’d need to use a pair of DEFINE statements: one to center your variables, and another to define a variable that consists of the cross-product of your moderator and your IV (in other words, an interaction term). However, *Mplus* has a tendency to process centering commands last in your DEFINE statements, regardless of the order you write them in, so you run the risk of computing your interaction term from uncentered variables (which gives it a wacky, and often unrealistic, interpretation). Save yourself the trouble, and just have your centered variables ready to go from the jump.
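The prep step itself is simple in any stats package. Here’s a minimal sketch in Python of what it looks like: center each variable, form the cross-product of the *centered* predictor and moderator, and write out the free-format .dat file that *Mplus* will read. The four raw cases below are made-up placeholder scores, not my simulated data:

```python
raw = [
    # (ID, stress, negativity, health) -- placeholder values
    (1, 3.0, 2.0, 4.0),
    (2, 5.0, 4.0, 6.0),
    (3, 2.0, 3.0, 3.0),
    (4, 6.0, 5.0, 7.0),
]

def center(values):
    # Subtract the sample mean from every score
    m = sum(values) / len(values)
    return [v - m for v in values]

stress_c = center([r[1] for r in raw])
negativ_c = center([r[2] for r in raw])
health_c = center([r[3] for r in raw])

# Interaction term = product of the *centered* predictor and moderator
strs_x_neg = [s * n for s, n in zip(stress_c, negativ_c)]

rows = [
    f"{rid} {s:.4f} {n:.4f} {h:.4f} {sxn:.4f}"
    for (rid, _, _, _), s, n, h, sxn
    in zip(raw, stress_c, negativ_c, health_c, strs_x_neg)
]

# Free-format file for the DATA command (FILE IS StressHealth.dat)
with open("StressHealth.dat", "w") as f:
    f.write("\n".join(rows) + "\n")
```

Do the centering and the cross-product in this order — centered first, then multiplied — and the uncentered-interaction problem described above never comes up.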

**Step 2:** Set up your basic regression model. Recall that we’re dealing with the following model:

X = Subjective stress/severity of negative life events experienced

Y = Severity of health symptoms reported 3 months later

M = Dispositional negativity.

Now, I won’t go into the nuts and bolts of how to write *Mplus* program syntax (I assume you’re familiar with the basics, otherwise you’d have quit reading this article by now). So let’s say we’ve specified the program and model like this:

(*Lines that start with an exclamation point (!) are just comments. Standard stuff in Mplus.*)

```
TITLE: Stress --> Health, Moderated by Negativity

DATA:
FILE IS StressHealth.dat;
FORMAT IS FREE;

!Variables are:
!ID = Participant ID number!
!STRESSc = Severity of negative life event experienced (centered)!
!HEALTHc = Severity of health symptoms reported 3 months later (centered)!
!NEGATIVc = Dispositional negativity (centered)!
!STRSxNEG = Interaction term -- severity of neg. life event BY negativity!

VARIABLE:
NAMES ARE ID STRESSc NEGATIVc HEALTHc STRSxNEG;
USEVARIABLES ARE STRESSc NEGATIVc HEALTHc STRSxNEG;

ANALYSIS:

MODEL:
HEALTHc ON STRESSc (b1)
           NEGATIVc (b2)
           STRSxNEG (b3);

OUTPUT: samp;
```

There are a few things that I’ve done here. Let’s examine them bit by bit, starting with our MODEL command.

**The MODEL Command:** You’ll notice that each of the predictors in this multiple regression model is *parameterized* (I’m talking about the little code-names in parentheses above). That’s just a fancy way of saying that I’ve tagged each predictor with a label of sorts — in this case I’ve used the labels B1, B2, and B3 for stress, negativity, and the interaction term, respectively. Those labels are important because they become parameters that we can then use in subsequent parts of the program (and indeed I did, as we’ll see later).

**The OUTPUT Command:** All I’ve requested here are descriptive sample statistics (using the “samp” option). We’ll need these for Part Two below, because we’ll need to know the sample mean and standard deviation of our moderator variable in order to create a meaningful plot later. In this case, all of my simulated variables are standardized, so unsurprisingly, my descriptive statistics look like this (the two numbers we’ll need are the mean and the variance of NEGATIVC):

```
SAMPLE STATISTICS

     Means
              HEALTHC   STRESSC   NEGATIVC  STRSXNEG
              ________  ________  ________  ________
      1        0.000     0.000     0.000     0.056

     Covariances
              HEALTHC   STRESSC   NEGATIVC  STRSXNEG
              ________  ________  ________  ________
 HEALTHC       0.998
 STRESSC      -0.018     0.998
 NEGATIVC      0.104     0.056     0.998
 STRSXNEG      0.479    -0.118    -0.090     2.507
```

**The mean of negativity here is 0.** Easy peasy.

As for the standard deviation, we can figure that out easily enough with simple math. Above, I pointed out the covariance between negativity and … well, negativity. Why? **Simple: the covariance between a variable and itself is just that variable’s variance.** And we know from our first stats class that the relationship between the variance and the standard deviation is super-straightforward: *SD* = √*variance*.

So in this case, the standard deviation of negativity is just the square root of .998, which gives us:

√.998 = .9989995 ≅ 1.

**The standard deviation of negativity is 1.** Unsurprising, as I said, because these variables are all standardized.

However, if you’re using unstandardized variables (and they had still damn well better be centered!), you can and should still do the math above with your moderator. Find the variance, get the square root, use that value in the next step. We’ll be using these mean and SD values in Part Two below. Let’s get to it.
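If you’d rather not punch this into a calculator, the same arithmetic is one line of code (Python here, using the NEGATIVC variance of .998 from the covariance diagonal above):

```python
import math

# Variance of the moderator, read off the covariance matrix diagonal
negativity_variance = 0.998

# SD = square root of the variance
negativity_sd = math.sqrt(negativity_variance)

print(round(negativity_sd, 7))  # 0.9989995, i.e., essentially 1
```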

**PART TWO: Johnson-Neyman Plots!**

Let’s revisit our *Mplus* program. Now that we’ve gone through Part One and obtained our sample estimates, we have some numbers we can work with. The program below is almost exactly the same as before, but in order to conduct the J-N procedure, we have to add two new parts: a MODEL CONSTRAINT segment featuring a **loop plot**, and a short & sweet PLOT command to go along with it. The extended program, which includes the J-N procedure, looks like this:

```
TITLE: Stress --> Health, Moderated by Negativity

DATA:
FILE IS StressHealth.dat;
FORMAT IS FREE;

!Variables are:
!ID = Participant ID number!
!STRESSc = Severity of negative life event experienced (centered)!
!HEALTHc = Severity of health symptoms reported 3 months later (centered)!
!NEGATIVc = Dispositional negativity (centered)!
!STRSxNEG = Interaction term -- severity of neg. life event BY negativity!

VARIABLE:
NAMES ARE ID STRESSc NEGATIVc HEALTHc STRSxNEG;
USEVARIABLES ARE STRESSc NEGATIVc HEALTHc STRSxNEG;

ANALYSIS:

MODEL:
HEALTHc ON STRESSc (b1)
           NEGATIVc (b2)
           STRSxNEG (b3);

!J-N Significance Regions:
!Effect of Stress on Health Across Levels of Negativity
!(all variables are Z scores)!
!For Negativity -- Mean = 0, SD = 1!

MODEL CONSTRAINT:
LOOP(NEGATIVc, -2, 2, .1);
PLOT(Stress_B);
Stress_B = b1 + b3*(NEGATIVc);

PLOT: TYPE = PLOT2;

OUTPUT: samp cinterval;
```

You might have also noticed that I added the option “cinterval” to my OUTPUT command at the very end. This option will give me confidence intervals for my effects. It’s just there because I like confidence intervals. They’re more useful than *p* values when dealing with inferential statistics (though that’s a discussion for another day).

For now let’s just examine the two new parts of importance, shall we?

** The MODEL CONSTRAINT Command:** This is the fun part, and also the most complex. Remember how I mentioned *parameterizing* my predictors during Part One above (that B1, B2, B3 stuff)? This was why. We need to use those parameters to generate J-N style plots. This part of the program is where we define the values used in the J-N plot, based on (1) the parameters we’ve specified, (2) the descriptive statistics we got during Part One above, and (3) our basic understanding of the mathematics behind an interaction in multiple regression. Let’s break it down.

*LOOP statement.* Instead of just displaying the differences in the magnitude of the stress effect in the typical +1SD/-1SD way (i.e., showing you what B1 is among those “high” versus “low” in negativity), the LOOP command tests the magnitude of the effect against a null effect (B = 0) across an entire range of negativity values that you specify. It displays this test graphically, complete with confidence bands around the effect (awesome, ain’t it?). The basic structure of the LOOP statement is as follows:

LOOP ([MODERATOR VARIABLE for X axis], [Lowest value of X axis to plot], [Highest value of X axis to plot], [incremental change in values to display on the X scale]);

Here’s my LOOP statement again:

LOOP(NEGATIVc, -2, 2, .1);

So what my program above does is, it tells *Mplus* the following:

1) Use values of the moderator variable NEGATIVc (negativity) on the X axis.

2) The lowest negativity value to plot for the stress effect will be -2.

3) The highest negativity value to plot for the stress effect will be 2.

4) The negativity (X) axis will show increments of .1 (so …, -.1, 0, .1, .2, .3, etc.).

Now, as for those lowest and highest X-axis (negativity) values: those are based on the descriptives we obtained in Part One above. The standard deviation of negativity is 1, so I’m simply plotting across the range of -2SD to +2SD. (This, too, is admittedly kind of arbitrary. Told you we’d be piggybacking.) Let’s check out the next line of the program.

*PLOT (Stress_B).* This is another parameterization step of sorts. I’m telling *Mplus* what I’ll actually want it to plot on the Y axis (which is my axis of interest). In this case, I want to generate a plot of the value of the **adjusted effect of stress on health** across all continuous values of dispositional negativity (which is why negativity is on the X axis — you see where I’m going with this now?). I’m calling it “Stress_B” because it’s going to be the value of *B* (or beta (*β*), the regression coefficient [also the logo for this site, w00t]) for the predictor STRESSc, adjusted for the effect of negativity (which we’ll do in just a sec).

You might be saying right now, *“Hey, Fred, I get that, but you haven’t even defined what ‘Stress_B’ is yet in your program. How can you tell the program to use it here?”* A fair question. The answer is, *Mplus* can read this stuff out of order without an issue. Patience. I’ll define what *Stress_B* is on the very next line of the program. Speak of the devil…

*Stress_B = b1 + b3\*(NEGATIVc).* Remember when I made all that fuss about defining parameters earlier during the MODEL command? Now we’re finally using them. I defined the main effect of stress on health as parameter *B1* earlier, and the interaction term as parameter *B3*. Because I defined those parameters already, I can summon them to do my bidding in other parts of the program later (it’s like Pokemon for data analysis, really).

*A brief aside:* frankly, you can call parameters whatever the hell you want. Instead of b1 and b3, I could have written (cucumber) and (triangle). *Mplus* doesn’t care. You’d just need to use those same parameter names consistently later on; in this case, your equation would awkwardly become:

Stress_B = cucumber + triangle\*(NEGATIVc)

Believe it or not, that stupid-looking line of syntax would work just fine in such a case. But I digress…

Do you *also* remember when I said you’ll need a rudimentary understanding of the algebra involved in computing interactions in multiple regression? Well here’s where it matters.

**A quick conceptual refresher: Understanding interaction effects in multiple regression.**

Here’s the slightly confusing way of explaining what we’re doing. If we want to know what the adjusted effect of stress on health looks like, we obviously need to account for the fact that that effect changes as an additive function of (1) the extent to which a person is dispositionally negative, times (2) the magnitude of the interactive effect of stress AND negativity on health. Confused yet? We’ll get there. Here’s what I just said, laid out slightly differently:

Stress_B = B1 + B3 × (Negativity)

A somewhat simpler way to explain this is to say that the adjusted effect of stress on health consists of the main effect of stress on health (B1) plus whatever effect (B3) a person’s negativity has on how strongly stress predicts health outcomes for that person. An EVEN SIMPLER way of putting it would be to say that determining whether or not stress affects your health may be entirely dependent on how negative you are in general.

**Beating a dead horse: more regression concepts, in case you’re still lost.**

To illustrate further, here’s what the actual results from my regression analysis of my simulated data looked like:

```
MODEL RESULTS
                                             Two-Tailed
                 Estimate     S.E.  Est./S.E.   P-Value

 HEALTHC  ON
    STRESSC        -0.001    0.042     -0.034     0.972
    NEGATIVC        0.122    0.042      2.885     0.004
    STRSXNEG        0.196    0.027      7.303     0.000

 Intercepts
    HEALTHC        -0.011    0.042     -0.258     0.797

 Residual Variances
    HEALTHC         0.891    0.056     15.813     0.000
```

It helps to clarify these effects conceptually by examining the actual numbers. It also helps to **remember that I simulated these data**, so I wouldn’t read into any of these findings for any substantive purpose – they ain’t real, people.

The main effect (B1) of stress on health here suggests that each time a person’s score on the stress variable increases by 1, their health symptom severity is predicted to change by -.001. Obviously this main effect is very, very, very, very, VERY near zero, so the *t* value associated with this effect is teeny-tiny (*t* = -.034), resulting in a *p* value that’s wildly high (*p* = .97). However, **the B1 value here represents the magnitude of that effect among people who are exactly average in terms of their dispositional negativity** (i.e., negativity = 0, because it’s a centered, standardized variable). Mathematically, this also means the interaction contributes nothing for these “average” negativity people: the interaction term = stress × negativity, negativity equals 0 for these people, and any number multiplied by zero equals zero.

*How negativity changes things.* If we look at the interaction term (*B3* = .196), we learn that the effect of stress on health is moderated by a person’s level of dispositional negativity (*t* = 7.30, *p* < .001). Conceptually, we’re saying that every time a person’s negativity score increases by 1, the adjusted effect of stress on health (or “Stress_B,” which we’re trying to plot) becomes more & more positive, increasing by .196 each time. In fact, this specifically predicts that when a person has a negativity score of 1, the effect of stress on health for that person should be around .195 or so:

Stress_B = -.001 + 1(.196) = -.001 + .196 = .195

Notice how that looks an awful lot like the equation I wrote into the actual program a bit earlier? It damn well should. If the equation above is correct, then our *J-N* plot should be consistent with that value (spoiler alert — it totally is). Also notice that even though we can say what that effect looks like when negativity equals 1, we still don’t know whether that effect tests as significantly different from zero. The *J-N* plot will tell us that too.
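That one-off arithmetic is exactly what the LOOP statement repeats across the whole grid of negativity values. Here’s a quick sketch (in Python, purely for illustration) that rebuilds the plotted Stress_B values from the rounded estimates above:

```python
b1 = -0.001   # main effect of (centered) stress on health
b3 = 0.196    # stress x negativity interaction term

# Replicate LOOP(NEGATIVc, -2, 2, .1): moderator values from -2 to 2 in steps of .1
grid = [round(-2 + 0.1 * i, 1) for i in range(41)]

# Stress_B = b1 + b3*(NEGATIVc): the adjusted stress slope at each moderator value
stress_b = {m: b1 + b3 * m for m in grid}

print(round(stress_b[-2.0], 3))  # -0.393
print(round(stress_b[0.0], 3))   # -0.001 (the "average negativity" main effect)
print(round(stress_b[1.0], 3))   # 0.195  (the value we predicted by hand above)
```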

**The PLOT Command:**

After all that craziness & explanation involved in the MODEL CONSTRAINT command, this part will seem incredibly anticlimactic. Recall that all we added to the program here was the following:

PLOT: TYPE = PLOT2;

This just tells *Mplus* to use a more advanced plotting scheme to display what we specified in the MODEL CONSTRAINT portion. Type 1 plots include just the very basic stuff (like histograms, means, and scatterplots). That ain’t good enough. Type 2 plots include more advanced data visualization applications such as eigenvalues in EFA, survival curves, IRT plots, and Bayesian distribution plots. So yeah… Type 2 is what we want here.

Now, without further ado, let’s finally look at the damn J-N plot!

**The J-N plot itself [drum roll]:**

Here’s what all that fancy-schmancy statistical & programming wizardry gets us… BEHOLD!

Yeah, yeah, yeah… I know. It’s not the prettiest thing ever. *Mplus* still has some pretty haggard graphs (don’t even get me started on the default latent variable SEM graphs. Holy sh*t, are they clunky looking). You might have noticed that not only did it not label the X axis for me, it also kind of ignored my scaling request for intervals of .1, .2, .3, etc., and instead opted to display the X axis in intervals of 1 (it tends to do things like this if the curve overlaps too much with the center of the image). Psshh… whatever, *Mplus.*

Still, this plot tells us quite a bit. A hell of a lot more than your typical “High vs. Low” *+1SD/-1SD* interaction plot seen in many articles. Here are the key components of the *J-N* plot, as we specified earlier:

1) The X-axis depicts a continuous range of our moderator (negativity) values.

2) The Y-axis depicts a continuous range of values for the adjusted effect of stress on health (i.e., “Stress_B”).

3) The straight, red plot line represents values of the adjusted effect (“Stress_B”) that correspond to the full range of all continuous values of negativity that we requested (in this case, recall that we requested -2 to +2, and that for our standardized variable these are just standard deviation units).

4) The colored, curved lines above and below the red plot line represent 95% confidence bands around the adjusted effect of stress on health (so cool, right?… right?).

Components 3 and 4 above are probably the biggest benefit of using the *J-N* procedure. They’re also why this procedure is often referred to as a “regions of significance” procedure. When we look at the confidence bands around “Stress_B,” we can clearly see the values (or regions) of the negativity continuum for which the effect of stress on health becomes meaningful — they’re the areas where the confidence bands do not include the possibility of “Stress_B” being equal to zero. I modified the plot below to make this whole “regions of significance” thing a bit easier to understand:

So here’s what the J-N plot suggests, in a nutshell:

1) For those who are within roughly .5 standard deviations below or .5 standard deviations above the average level of dispositional negativity, stress severity doesn’t predict health outcomes one way or the other. This makes sense when you recall that the main effect of stress in our regression analysis was super small (because it was the effect for those “average” people whose negativity score was zero).

2) The more negative a person is, the more strongly stress predicts worse health (i.e., greater symptom severity). Looking at the graph, the effect of stress on health symptoms appears to be significant once a person scores roughly .5 standard deviations (or more) above the mean on the negativity scale (that’d be the red region in the graph above).

3) The less negative a person is, the more meaningful the relationship between stress and health symptoms becomes in the other direction (that’d be the blue region in the graph above). In fact, the plot suggests that for people who are roughly .5 standard deviations (or more) below average on negativity, *Stress_B* is significant and **negative** (i.e., experiencing more stress predicts having lower than average health symptom severity). Again, however, I have to stress this: **I simulated these data.** I made them up out of thin air on a random Saturday morning so I could illustrate how to plot these interactions, so don’t read too far into this finding. Seriously.
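Incidentally, the “regions of significance” boundaries can be checked by hand. The sampling variance of the adjusted slope is Var(b1) + M²·Var(b3) + 2M·Cov(b1, b3), so the 95% band excludes zero wherever |Stress_B| > 1.96 × SE. Here’s a sketch in Python using the SEs from the MODEL RESULTS above. One loud caveat: Cov(b1, b3) isn’t shown in the output excerpt, so I’m *assuming* it’s approximately zero here; in a real analysis you’d pull it from the parameter covariance matrix (the TECH3 output option in *Mplus*).

```python
import math

# Rounded estimates and SEs from the MODEL RESULTS output above
b1, se_b1 = -0.001, 0.042   # stress main effect
b3, se_b3 = 0.196, 0.027    # stress x negativity interaction
cov_b1_b3 = 0.0             # ASSUMED ~0 (not shown in the output excerpt)
z = 1.96                    # 95% critical value

def adjusted_slope(m):
    # Stress_B at moderator value m
    return b1 + b3 * m

def slope_se(m):
    # SE of the linear combination b1 + b3*m
    return math.sqrt(se_b1**2 + (m**2) * se_b3**2 + 2 * m * cov_b1_b3)

# Scan the same -2 to +2 grid the LOOP statement used
significant = []
for i in range(-20, 21):
    m = i / 10.0
    lower = adjusted_slope(m) - z * slope_se(m)
    upper = adjusted_slope(m) + z * slope_se(m)
    if lower > 0 or upper < 0:   # confidence band excludes zero
        significant.append(m)

print(min(m for m in significant if m > 0))   # 0.5  -> start of the "high" region
print(max(m for m in significant if m < 0))   # -0.5 -> end of the "low" region
```

Both boundaries land at roughly ±.5 SD on this grid, which lines up with the regions described in the nutshell points above.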

Also, remember when I said this a while ago:

*Conceptually, we’re saying that every time a person’s negativity score increases by 1, the adjusted effect of stress on health (or “Stress_B,” which we’re trying to plot) becomes more & more positive, increasing by .196. In fact, this specifically predicts that when a person has a negativity score of 1, the effect of stress on health for that person should be around .195 or so:*

Stress_B = -.001 + 1(.196) = -.001 + .196 = .195

*Notice how that looks an awful lot like the equation I wrote into the actual program a bit earlier? It damn well should. If the equation above is correct, then our J-N plot should be consistent with that value.*

Check out the modified graph below:

**Extensions and Conclusion.**

*Extensions.* So we see that by using the J-N procedure for plotting interactions, you can make much more specific statements about how & when a particular moderator might influence the relationship between a predictor and an outcome. What’s also cool is that you can extend the J-N procedure with similar syntax to actually test the simple slopes that you observe in the graph (these simple slopes correspond to the three “in a nutshell” points I made in the previous paragraphs). To do so, you’d add a brand new MODEL CONSTRAINT command that looks almost identical to the old one, except you’d specify new parameter names for the simple slopes, and instead of stating the name of the variable itself in your equation, you’d plug in a *specific value* of that variable to test.

So, for example, we saw from the J-N plot that when negativity is either ≤ -.5 or ≥ .5, the effect of stress on health is no longer equal to zero. Well, we can test that assertion directly by making a small change. We take the “Stress_B” part of our old syntax (given below):

```
MODEL CONSTRAINT:
LOOP(NEGATIVc, -2, 2, .1);
PLOT(Stress_B);
Stress_B = b1 + b3*(NEGATIVc);
```

And make a slight change by adding specific negativity values to test simple slopes, using the same b1 and b3 parameters from before. The new portion looks like this:

```
MODEL CONSTRAINT:
NEW (SLP_LO SLP_HI);
!SLP_LO = simple slope of stress for those at -.5 on negativity!
!SLP_HI = simple slope of stress for those at +.5 on negativity!
SLP_LO = b1 + b3*(-.5);
SLP_HI = b1 + b3*(.5);
```

I’ve added the statement “NEW ([parameter 1] [parameter 2])” here so *Mplus* knows that I’m about to define a couple of new parameters to test. In this case, the new parameters are my two simple slopes, corresponding to the blue and red regions (respectively) of my previous graph. The output of the analysis including this new test of the simple slopes looks like this (the new parameters appear at the bottom):

```
MODEL RESULTS
                                             Two-Tailed
                 Estimate     S.E.  Est./S.E.   P-Value

 HEALTHC  ON
    STRESSC        -0.001    0.042     -0.034     0.972
    NEGATIVC        0.122    0.042      2.885     0.004
    STRSXNEG        0.196    0.027      7.303     0.000

 Intercepts
    HEALTHC        -0.011    0.042     -0.258     0.797

 Residual Variances
    HEALTHC         0.891    0.056     15.813     0.000

 New/Additional Parameters
    SLP_LO         -0.099    0.044     -2.277     0.023
    SLP_HI          0.096    0.045      2.121     0.034
```

Turns out that what we inferred from the graph was accurate. Both simple slopes tested significantly different from zero, as we can see in the output above (both *p* values < .05). What’s also cool is, *Mplus* will give you confidence intervals around these new simple slopes in later parts of the output (unless you removed that “cinterval” option I used earlier), so you can also comment on the precision of these estimated slopes in addition to their relevant *p* values.
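You can sanity-check those new parameters by hand, too. A sketch in Python using the rounded coefficients (again, Cov(b1, b3) is *assumed* to be about zero here, so expect small discrepancies from the *Mplus* values, which are computed from the unrounded estimates and the full parameter covariance matrix):

```python
import math

# Rounded estimates and SEs from the MODEL RESULTS output above
b1, se_b1 = -0.001, 0.042
b3, se_b3 = 0.196, 0.027

def simple_slope(m, cov_b1_b3=0.0):
    # Point estimate and SE of the stress slope at moderator value m
    est = b1 + b3 * m
    se = math.sqrt(se_b1**2 + (m**2) * se_b3**2 + 2 * m * cov_b1_b3)
    return est, se

slp_lo, se_lo = simple_slope(-0.5)
slp_hi, se_hi = simple_slope(0.5)

print(round(slp_lo, 3), round(se_lo, 3))  # matches SLP_LO: -0.099, 0.044
print(round(slp_hi, 3), round(se_hi, 3))  # close to SLP_HI (Mplus shows .096 / .045)
print(round(slp_lo / se_lo, 2), round(slp_hi / se_hi, 2))  # both |t| > 1.96
```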

Pretty neat stuff.

You know what’s even cooler? You can actually use these J-N style approaches with paths in latent variable structural equation modeling — even with *latent-X-latent* continuous interactions (mind = blown, right?). I’ll hold off on explaining that stuff for now, as that could be an article unto itself. Still, know that it’s a flexible approach.

* Conclusion.* If your regression analysis includes continuous interactions, definitely consider going the Johnson-Neyman route for graphing them. You’ll end up with a much richer picture of the nature of your moderation effects. People besides me seem to have noticed the potential of the J-N procedure too — while recently attending the 2015 SPSP conference, I saw a couple of talks where authors used this approach. Here’s hoping that it continues increasing in popularity.

Also, I do know that there are ways of doing similar things in other popular stat programs like *SPSS* and *SAS.* I haven’t learned those processes yet though. As it happens, I learned the J-N procedure in *Mplus,* and I’ve used it successfully in my own work. So that’s what you’ll get from me, folks. Feel free to sniff around the internet for more info on using J-N style procedures in other programs.

Have fun crunching those numbers!

**References**:

Johnson, P. O. & Neyman, J. (1936). Tests of certain linear hypotheses and their applications to some educational problems. *Statistical Research Memoirs, **1*, 57–93.


**Comments**

Thanks so much! This post has been very helpful as we probe interactions in Mplus. In the past, we had been using the Preacher and Curran online tool. Because I was curious, I compared both methods, and I found that the estimates for the simple slopes were the same with both methods, but that they differed in the significance associated with the simple slopes. For instance, Mplus gives different standard errors for the simple slopes than the Preacher and Curran online program. Do you know why this might be?

Hi Kristina,

Good question. I’d be inclined to chalk it up to a statistical quirk of the two different programs. I’ve never used the Preacher & Curran tool, but I’d be curious to see just how different the SE’s for the slopes are.

I do know that Mplus has this odd thing it does where it will generate t-test values and associated p values for standardized effects. It’s a weird move because the program is generating brand new, unique standard errors for these Betas, but I’m not sure why it actually does this. It can be problematic because usually the Betas are technically estimated with greater precision than the raw B weights, according to this specialized Mplus output. So it can seem as though the effects in your model have larger t values and smaller p values than they might have in reality. Most people I know report tests of significance based on raw regression estimates. I’ve not come across work where these new t test values uniquely estimated just for the Betas are reported. Perhaps this is a new thing and I just missed the boat, but it does strike me as odd.

Perhaps the difference between program outputs has something to do with this standardization? Other than that, your guess is as good as [or better than] mine.



