
Operationalising Moral Foundations Theory

[email protected]

I’m still on this one


‘Moral-foundations researchers have investigated the similarities and differences in morality among individuals across cultures (Haidt & Joseph, 2004). These researchers have found evidence for five fundamental domains of human morality’

(Feinberg & Willer, 2013, p. 1)


Very important for philosophers. They aren’t allowed just to make it up.
Big point: we have a method for identifying moral abilities that doesn’t depend on prior assumptions about what counts as ethical.

There may be cultural variations on what is, and what isn’t, an ethical issue.

So we can’t assume in advance that we know for sure what is ethical and what isn’t.

But if we don’t know what is ethical and what isn’t, how can we study cultural variations in it?

Moral Foundations Questionnaire

When you decide whether something is right or wrong, to what extent are the following considerations relevant to your thinking?

... whether or not someone was harmed

... whether or not someone suffered emotionally

... whether or not someone did something disgusting

... whether or not someone did something unnatural or degrading

Graham et al, 2009

Basic requirements

- internal validity (roughly, do answers to each category of questions appear to reflect a single and distinct underlying tendency)

‘The scale is internally consistent (both within and between two question formats)’ (between = relevance questions vs judgements)
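
To make the internal-consistency requirement concrete, here is a minimal sketch (not Graham et al.’s own analysis) of one standard index, Cronbach’s alpha, computed from scratch with numpy. The array name items, and the idea of feeding it the answers belonging to a single foundation, are illustrative assumptions.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a (n_respondents, n_items) array of answers
        to the questions belonging to one foundation (illustrative only)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each question
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

Values near 1 indicate that answers to the questions in a category move together, which is roughly what ‘reflect a single underlying tendency’ demands; whether the categories are also distinct from one another is what the factor analysis below addresses.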

Graham et al, 2011 figure 3 (part)

Confirmatory factor analysis

observed variables: answers to individual MFQ questions

latent factors: the five moral primitives

For a clear, nontechnical intro to confirmatory factor analysis (and the factorial invariance concepts we’ll get to later), see Gregorich (2006, pp. S78–S83) and Lee (2018). (You do not need to understand this, but doing so will help you to understand the evidence supporting, and threatening, applications of Moral Foundations Theory to cross-cultural comparison.) A minimal code sketch of the five-factor model follows the figure below.

Graham et al, 2011 figure 3 (part)
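
As an illustration only (this is not the authors’ analysis script, and the item names harm_1 … purity_3 are hypothetical placeholders), a five-factor CFA of the kind pictured in the figure could be specified in Python with the semopy package, assuming a data frame of MFQ responses:

    import pandas as pd
    import semopy

    # Each latent factor (foundation) is measured only by its own MFQ items.
    MFQ_FIVE_FACTOR = """
    Harm      =~ harm_1 + harm_2 + harm_3
    Fairness  =~ fair_1 + fair_2 + fair_3
    Ingroup   =~ ingroup_1 + ingroup_2 + ingroup_3
    Authority =~ authority_1 + authority_2 + authority_3
    Purity    =~ purity_1 + purity_2 + purity_3
    """

    def fit_five_factor(responses: pd.DataFrame):
        """Fit the hypothesised five-factor model and return fit indices
        (CFI, RMSEA, etc.) for comparison with competing models."""
        model = semopy.Model(MFQ_FIVE_FACTOR)
        model.fit(responses)   # estimates loadings, factor covariances, residuals
        return semopy.calc_stats(model)

Comparing these fit indices against those of, say, a two-factor model is how claims like the one quoted next (‘the five-factor model fit the data better ... than competing models’) are evaluated.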

‘The five-factor model fit the data better (weighing both fit and parsimony) than competing models, and this five-factor representation provided a good fit for participants in 11 different world areas.’

(Graham et al., 2011, p. 380)


‘[...] empirical support for the MFQ for the first time in a predominantly Muslim country. [...] the 5-factor model, although somewhat below the standard criteria of fitness, provided the best fit among the alternatives.

[...] one can conclude that, at least in non-English speaking countries, the MFQ is not the ideal device to measure the theoretical framework of the MFT’ (Yilmaz, Harma, Bahçekapili, & Cesur, 2016, p. 153).


Basic requirements

- internal validity (roughly, do answers to the three questions appear to reflect a single underlying tendency)

- Test–retest reliability (are you as an individual likely to give the same answers at widely-spaced intervals? Yes (37 days)! Graham et al., 2011, p. 371; a minimal sketch of the computation follows this list)

‘We gave the MFQ to 123 college students (mean age 20.1 years; 69.9% female) from the University of Southern California. After an average interval of 37.4 days (range 28 – 43 days), participants completed the MFQ a second time’ (Graham et al., 2011, p. 371).

- external validity (relation to other scales)

multiple external scales for each foundation.

‘each foundation was the strongest predictor for its own conceptually related group of external scales’ (Graham et al., 2011, p. 373)
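
As a sketch of what the test–retest check amounts to (the numbers below are made up, not Graham et al.’s data): correlate each participant’s subscale score at the first administration with their score roughly 37 days later.

    import numpy as np

    def test_retest_r(time1: np.ndarray, time2: np.ndarray) -> float:
        """Pearson correlation between two administrations of the same scale."""
        return float(np.corrcoef(time1, time2)[0, 1])

    # Hypothetical Purity subscale scores for five participants, ~37 days apart:
    t1 = np.array([3.2, 4.1, 2.5, 3.8, 4.6])
    t2 = np.array([3.0, 4.3, 2.7, 3.6, 4.5])
    print(test_retest_r(t1, t2))  # high correlation => individuals' answers are stable

External validity is checked in a similar correlational spirit: each foundation’s score should relate most strongly to the external scales it is conceptually tied to.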

Do you see how the puzzle has been solved by MFT?

There may be cultural variations on what is, and what isn’t, an ethical issue.

So we can’t assume in advance that we know for sure what is ethical and what isn’t.

But if we don’t know what is ethical and what isn’t, how can we study cultural variations in it?

Does MFT answer this question?

fieldwork -> hypothetical model -> CFA -> revise model -> ...

We just looked at three basic requirements but actually there is one more ...

Basic requirements

- internal validity (roughly, do answers to the three questions appear to reflect a single underlying tendency)

- Test–retest reliability (are you as an individual likely to give the same answers at widely-spaced intervals? Yes (37 days)! Graham et al., 2011, p. 371)

‘We gave the MFQ to 123 college students (mean age 20.1 years; 69.9% female) from the University of Southern California. After an average interval of 37.4 days (range 28 – 43 days), participants completed the MFQ a second time’ (Graham et al., 2011, p. 371).

- external validity (relation to other scales)

‘each foundation was the strongest predictor for its own conceptually related group of external scales’ (Graham et al., 2011, p. 373)

- measurement invariance (for cross-cultural comparison)

see Lee (2018) on measurement invariance

This figure is based on comparing group means: comparing means is exactly what you need (scalar) measurement invariance for

Graham et al, 2009 figure 3

Does this reflect merely differences in how people interpret the questions, or substantial differences in their moral foundations?

‘A finding of measurement invariance would provide more confidence that use of the MFQ across cultures can shed light on meaningful differences between cultures rather than merely reflecting the measurement properties of the MFQ’

(Iurino & Saucier, 2020, p. 2)
Compare Lee (2018): ‘Ascertaining scalar invariance allows you to substantiate multi-group comparisons of factor means (e.g., t-tests or ANOVA), and you can be confident that any statistically significant differences in group means are not due to differences in scale properties.’


I’m not presenting Iurino & Saucier (2020)’s own findings here because they note some limits of their sample and methodology.

metric invariance - you can compare variances

scalar invariance - you can compare means (eg ‘conservatives put more weight on purity than liberals’)
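
To see why this matters, here is a toy simulation (purely illustrative numbers, not MFQ data): two groups with exactly the same latent level of a foundation, differing only in how an item measures it.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    latent = rng.normal(0.0, 1.0, n)   # identical latent distribution in both groups

    def noise() -> np.ndarray:
        """Item-specific measurement error."""
        return rng.normal(0.0, 0.5, n)

    # Reference group: baseline measurement properties.
    item_ref = 3.0 + 0.8 * latent + noise()

    # Scalar invariance fails: same loading, higher intercept.
    # Raw item means differ even though latent means are identical,
    # so comparing means would mislead.
    item_shifted = 3.5 + 0.8 * latent + noise()

    # Metric invariance fails: same intercept, weaker loading.
    # Raw item variances differ even though latent variances are identical.
    item_flattened = 3.0 + 0.5 * latent + noise()

    print(item_ref.mean(), item_shifted.mean())  # ~3.0 vs ~3.5
    print(item_ref.var(), item_flattened.var())  # ~0.89 vs ~0.50

This is why scalar invariance has to be established before mean comparisons like ‘conservatives put more weight on purity than liberals’ can be taken at face value.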

Does Moral Foundations Theory provide a model that is invariant?

Davis et al. (2016): metric but not scalar invariance for Black people vs White people


Atari, Graham, & Dehghani (2020): scalar non-invariance for US vs Iranian participants

Doğruyol, Alper, & Yilmaz (2019): metric non-invariance for WEIRD/non-WEIRD samples

‘the five-factor model of MFQ revealed a good fit to the data on both WEIRD and non-WEIRD samples. Besides, the five-factor model yielded a better fit to the data as compared to the two-factor model of MFQ. Measurement invariance test across samples validated factor structure for the five-factor model, yet a comparison of samples provided metric non-invariance implying that item loadings are different across groups [...] although the same statements tap into the same moral foundations in each case, the strength of the link between the statements and the foundations were different in WEIRD and non-WEIRD cultures’ (Doğruyol et al., 2019).


‘there were problems with scalar invariance, which suggests that researchers may need to carefully consider whether this scale is working similarly across groups before conducting mean comparisons’ (Davis et al., 2016, p. e27).

NB: It is arguably an abuse of the tool to compare WEIRD and non-WEIRD samples, since we expect each of those groupings to be internally heterogeneous (and that heterogeneity was not tested). Should I instead mention another study? Or use this as a good example of how cautious you have to be in using published research?

Basic requirements

- internal validity (roughly, do answers to the three questions appear to reflect a single underlying tendency)

- Test–retest reliability (are you as an individual likely to give the same answers at widely-spaced intervals? Yes (37 days)! Graham et al., 2011, p. 371)

‘We gave the MFQ to 123 college students (mean age 20.1 years; 69.9% female) from the University of Southern California. After an average interval of 37.4 days (range 28 – 43 days), participants completed the MFQ a second time’ (Graham et al., 2011, p. 371).

- external validity (relation to other scales)

multiple external scales for each foundation.

‘each foundation was the strongest predictor for its own conceptually related group of external scales’ (Graham et al., 2011, p. 373)

- measurement invariance (for cross-cultural comparison)

see Lee (2018) on measurement invariance
