------------------------------------------------------------------------------------------------------------------------------------------------
      name:
       log:  C:\Users\Marta\Google Drive\Drive_lab_live\Policy\Writing_\Research methods_\GDocs\Measuring Non-Cognitive Skills.txt
  log type:  text
 opened on:  18 Sep 2020, 10:39:42

. 
. ********************************************************************************
. /****** Section 1 : Confirmatory Factor Analysis ******/
. 
. // create locals for the tested scale: the full item list and the two hypothesised sub-scales
. local scale2 scale2_item1 scale2_item2 scale2_item3 scale2_item4 scale2_item5 scale2_item6 scale2_item7 scale2_item8
. local scale2_1 scale2_item1 scale2_item2 scale2_item3 scale2_item4
. local scale2_2 scale2_item5 scale2_item6 scale2_item7 scale2_item8
. 
. // estimate structural model
. eststo clear
. qui sem (`scale2' <- ONEFACTOR), iter(100)    // estimate one-factor structural model
. estat gof    // check goodness of fit for the model

----------------------------------------------------------------------------
Fit statistic        |      Value   Description
---------------------+------------------------------------------------------
Likelihood ratio     |
          chi2_ms(20)|     24.213   model vs. saturated
             p > chi2|      0.233
          chi2_bs(28)|     36.295   baseline vs. saturated
             p > chi2|      0.135
----------------------------------------------------------------------------

. estadd scalar r(cfi)

added scalar:
    e(cfi) =  .49213379

. estadd scalar r(tli)

added scalar:
    e(tli) =  .28898731

. estadd scalar r(rmsea)

added scalar:
    e(rmsea) =  .10817771

Note: the one-factor model is a poor fit to the data. This makes sense, as the scale is intended to measure two separate components. We therefore test a two-factor model:

. 
. eststo clear
. qui sem (`scale2_1' <- TWOFACTOR1) (`scale2_2' <- TWOFACTOR2), iter(100)    // estimate two-factor structural model
. estat gof

----------------------------------------------------------------------------
Fit statistic        |      Value   Description
---------------------+------------------------------------------------------
Likelihood ratio     |
          chi2_ms(19)|     19.275   model vs. saturated
             p > chi2|      0.439
          chi2_bs(28)|     36.295   baseline vs. saturated
             p > chi2|      0.135
----------------------------------------------------------------------------

. estadd scalar r(cfi)

added scalar:
    e(cfi) =  .96683036

. estadd scalar r(tli)

added scalar:
    e(tli) =  .95111843

. estadd scalar r(rmsea)

added scalar:
    e(rmsea) =  .02836427

The two-factor model satisfies all selection criteria.
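For tabulation, the three fit indices can also be pulled from a single estat gof call rather than three separate estadd scalar r() copies: when estat gof is run with the stats(all) option it returns the baseline-comparison indices and RMSEA in r() alongside the chi-squared statistics. A minimal sketch of this alternative, assuming the locals and factor names above; the stored-estimate name "twofactor" and the scalar names are arbitrary choices of ours, not part of the original log:

    eststo clear
    qui sem (`scale2_1' <- TWOFACTOR1) (`scale2_2' <- TWOFACTOR2), iter(100)
    qui estat gof, stats(all)                  // returns r(cfi), r(tli), and r(rmsea), among others
    estadd scalar cfi   = r(cfi)               // attach fit indices to the estimates in memory
    estadd scalar tli   = r(tli)
    estadd scalar rmsea = r(rmsea)
    eststo twofactor                           // store after estadd so the scalars travel with the results
    esttab twofactor, scalars(cfi tli rmsea)   // loadings with the fit indices appended

Because estadd modifies the estimation results currently in memory, storing with eststo after the estadd calls keeps the fit indices attached to the stored results for esttab.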
. 
. 
. ********************************************************************************
. /****** Section 2 : Maximum Endorsement Frequencies ******/
. 
. // create local for the scale tested containing scale items
. local scale1 scale1_item1 scale1_item2 scale1_item3 scale1_item4 scale1_item5
. 
. foreach s of local scale1 {
  2.     qui {
  3.         egen `s'_max = mode(`s')            // calculate the most common answer
  4.         count if `s'_max == `s'
  5.         replace `s'_max = `r(N)'            // replace with the frequency of the most common answer
  6.         sum `s'_max
  7.         replace `s'_max = `s'_max/`r(N)'    // divide by the total number of observations
  8.     }
  9.     if `s'_max > 0.8 {
 10.         di "Error: `s' breaches MEF criterion"    // flag items whose modal answer covers more than 80% of responses
 11.     }
 12. }
Error: scale1_item5 breaches MEF criterion
. 
. 
. 
. ********************************************************************************
. /****** Section 3 : Comprehension ******/
. 
. * First, test comprehension of each scale at observation level
. 
. // create local containing the names of the scales used
. local scale_list scale1 scale2 scale3
. 
. // count the number of missing items in each scale per observation
. foreach x of local scale_list {
  2.     egen `x'_miss_count = rowmiss(`x'_*)
  3. }
. 
. // create indicator = 1 if respondent has too many missing items, depending on scale size
. gen miss_too_many_scale1 = (scale1_miss_count >= 2)    // for scales with 4-5 items
. gen miss_too_many_scale2 = (scale2_miss_count >= 3)    // for scales with 6-8 items
. gen miss_too_many_scale3 = (scale3_miss_count >= 4)    // for scales with >8 items
. 
. forvalues i=1/3 {
  2.     qui count if miss_too_many_scale`i' == 1
  3. }
. 
. // set all of a scale's items to missing for observations with too many missing items in that scale
. forvalues i = 1/5 {
  2.     qui replace scale1_item`i' = . if miss_too_many_scale1 == 1
  3. }
. 
. forvalues i = 1/8 {
  2.     qui replace scale2_item`i' = . if miss_too_many_scale2 == 1
  3. }
. 
. forvalues i = 1/10 {
  2.     qui replace scale3_item`i' = . if miss_too_many_scale3 == 1
  3. }
. 
. * Second, test comprehension at item level - example for scale 1
. 
. forvalues i = 1/5 {
  2.     qui {
  3.         count if mi(scale1_item`i')
  4.         gen prop_miss_scale1_`i' = `r(N)'    // number of missing responses for the item
  5.         sum scale1_miss_count
  6.         replace prop_miss_scale1_`i' = prop_miss_scale1_`i'/`r(N)'    // divide by the total number of observations
  7.     }
  8. }
. 
. forvalues i = 1/5 {
  2.     if prop_miss_scale1_`i' > 0.2 {
  3.         di "Item `i' does not satisfy comprehension criteria"
  4.     }
  5. }
Item 1 does not satisfy comprehension criteria
. 
. ********************************************************************************
. 
. log close
      name:
       log:  C:\Users\Marta\Google Drive\Drive_lab_live\Policy\Writing_\Research methods_\GDocs\Measuring Non-Cognitive Skills.txt
  log type:  text
 closed on:  18 Sep 2020, 10:39:47
------------------------------------------------------------------------------------------------------------------------------------------------
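A note on Section 2: the logged loop overwrites `s'_max three times, and its final denominator is the full sample rather than the item's respondents, despite the comment. A cleaner, minimal sketch of the same check, assuming the scale1 local above and the 0.8 cutoff from the log; here the denominator is the item's non-missing responses, so adjust it if the full sample is intended:

    foreach s of local scale1 {
        qui {
            egen mode_`s' = mode(`s'), maxmode    // modal answer; maxmode breaks ties
            count if `s' == mode_`s'              // frequency of the modal answer
            local n_mode = r(N)
            count if !missing(`s')                // non-missing responses to the item
            local mef = `n_mode'/r(N)             // maximum endorsement frequency
            drop mode_`s'
        }
        if `mef' > 0.8 {
            di "`s' breaches the MEF criterion (MEF = " %5.3f `mef' ")"
        }
    }

Working in locals rather than repeatedly rewriting a variable also leaves the item data untouched.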
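A note on Section 3: the item-level check stores each proportion in a constant variable and then tests it with the if command, which evaluates only the first observation. That works here because the variable is constant across observations, but locals avoid both the extra variables and the pitfall. A minimal sketch, assuming scale 1's five items and the 0.2 cutoff from the log:

    qui count
    local n_obs = r(N)                     // total observations
    forvalues i = 1/5 {
        qui count if missing(scale1_item`i')
        local prop_miss = r(N)/`n_obs'     // proportion of missing responses for the item
        if `prop_miss' > 0.2 {
            di "Item `i' does not satisfy the comprehension criterion " ///
                "(proportion missing = " %5.3f `prop_miss' ")"
        }
    }

Note that if the item-level check is run after the observation-level cleaning above, the proportions include the responses set to missing in that step; run it on the raw items first if the two checks are meant to be independent.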