
Instructions: Reliability, Validity and Internal Consistency Peer RESPONSE DB 820

Develop a response to the post below. The reply must summarize the author’s findings and indicate areas of agreement, disagreement, and improvement. It must be supported with at least two (2) scholarly citations in 7th edition APA format and a corresponding list of references.

References (in addition to the scholarly articles, you must include the following):
Morgan, G. A., Leech, N. L., Gloeckner, G. W., & Barrett, K. C. (2013). IBM SPSS for introductory statistics: Use and interpretation (5th ed.). Routledge.

Refer to the attached chapter (Chapter 7).

Reply to D4.1, D4.2 and D4.3 below (JA)

D4.1 Interpreting Reliability

As studies are carried out, they often have several researchers evaluating and coding the data. Researchers must ensure that each person codes consistently; otherwise, statistics cannot be run accurately on the data. This consistency in coding is known as interrater reliability (Belur et al., 2021).

Output 7.2 in Morgan et al. (2013) shows an evaluation of interrater reliability for the visualization and mosaic variables. Morgan et al. explain that the paired-samples t test helps determine whether two raters’ mean scores differ systematically. Output 7.2 provides the results of the paired t test along with the interrater reliability correlations. From the Paired Samples Statistics table, it can be seen that the mean scores for the variables are very similar: for the visualization test, the means were 5.24 and 5.10, and for the mosaic pattern test, they were 27.41 and 27.48. This implies that the two raters scored very similarly.
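To make the paired-samples step concrete, here is a minimal Python sketch of the same check. The rater scores below are hypothetical placeholders, not the actual dataset analyzed by Morgan et al. (2013):

```python
# Minimal sketch of the paired-samples check described above, using
# hypothetical scores from two raters (not the real Morgan et al. data).
import numpy as np
from scipy import stats

# Illustrative visualization-test scores from two raters on the same students
rater1 = np.array([4.5, 5.0, 6.0, 5.5, 4.8, 5.2, 6.1, 4.9])
rater2 = np.array([4.4, 5.1, 5.8, 5.3, 4.7, 5.0, 6.0, 4.8])

# Paired Samples Statistics: compare the two raters' means
print(f"Rater 1 mean: {rater1.mean():.2f}")
print(f"Rater 2 mean: {rater2.mean():.2f}")

# Paired-samples t test: a non-significant result suggests the raters
# do not differ systematically in their mean scoring
t, p = stats.ttest_rel(rater1, rater2)
print(f"t = {t:.3f}, p = {p:.3f}")
```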

Interrater reliability can also be checked in the Paired Samples Correlations table, which shows the interrater reliability coefficients. The correlation for the visualization test was .938, and for the mosaic pattern test it was .957. These values show that if a student was given a high score by the first researcher, that student was also given a high score by the second researcher. Both coefficients are high and suggest strong interrater reliability (Morgan et al., 2013).
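The correlation step can be sketched the same way, again with made-up scores rather than the real data:

```python
# Hedged sketch of the Paired Samples Correlations step: a Pearson
# correlation between two raters' scores (hypothetical data, as above).
import numpy as np
from scipy import stats

rater1 = np.array([4.5, 5.0, 6.0, 5.5, 4.8, 5.2, 6.1, 4.9])
rater2 = np.array([4.4, 5.1, 5.8, 5.3, 4.7, 5.0, 6.0, 4.8])

# A coefficient near 1 means students scored high by one rater
# also tended to be scored high by the other
r, p = stats.pearsonr(rater1, rater2)
print(f"Interrater reliability coefficient: r = {r:.3f} (p = {p:.3f})")
```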

D4.2 Interpreting Validity

Researchers must also look at measurement validity. Morgan et al. (2013) define it as being “concerned with establishing evidence for the use of a measure or instrument in a particular setting with a specific population for a given purpose” (p. 112). Validity can be assessed in SPSS by running a factor analysis, which was done in Output 7.3.

In Output 7.3 in Morgan et al. (2013), the factor loadings of each variable can be seen in the Rotated Factor Matrix. The analysis was run so that it would suppress loadings with an absolute value of less than .30. The lowest loading in the matrix belongs to item09, which loaded at only .332 on Factor 2. The variable was meant to measure competence but appears to be grouped with the other variables designed to measure motivation, so it could be removed from the factor analysis.
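For readers who want to reproduce this kind of output outside SPSS, the sketch below runs a rotated factor analysis in Python with the factor_analyzer package and then suppresses small loadings, mirroring the option used in Output 7.3. The item data are randomly generated placeholders built around three latent traits, not the actual questionnaire items:

```python
# Hypothetical sketch of a rotated factor analysis, assuming the
# factor_analyzer package is installed (pip install factor-analyzer).
# The items are simulated stand-ins, not the real Morgan et al. items.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
n = 73
competence = rng.normal(size=(n, 1))
motivation = rng.normal(size=(n, 1))
pleasure = rng.normal(size=(n, 1))

# Twelve placeholder items, four loading on each latent trait
data = np.hstack([
    competence + rng.normal(scale=0.6, size=(n, 4)),
    motivation + rng.normal(scale=0.6, size=(n, 4)),
    pleasure + rng.normal(scale=0.6, size=(n, 4)),
])
items = pd.DataFrame(data, columns=[f"item{i:02d}" for i in range(1, 13)])

# Extract three factors with varimax rotation, as in Output 7.3
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["Factor1", "Factor2", "Factor3"])

# Mimic SPSS's "suppress small coefficients" option: hide loadings < .30
print(loadings.where(loadings.abs() >= 0.30).round(3))
```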

An example of measuring validity can be seen in an article by Bélanger et al. (2019). The authors assessed the validity of applying a specific self-reported questionnaire, and an observer version of the same questionnaire, to different types of patients. They evaluated the validity of these instruments for patients with dementia and depression and found that one version was not as valid when used with patients with dementia.

D4.3 Interpreting Internal Consistency

Cronbach’s alpha is used to evaluate the internal consistency of a scale or test (Tavakol & Dennick, 2011). That is, it evaluates how interrelated the items are and whether they can be combined into a single variable. Outputs 7.4a, 7.4b, and 7.4c each measure the internal consistency of a combined scale.

Output 7.4a evaluates the four items in the combined revised competence scale, based on 73 valid responses. The Cronbach’s alpha value is .856. Morgan et al. (2013) suggest that values above .70 indicate acceptable internal consistency, so .856 suggests strong internal consistency among the four items making up the scale.
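As a rough illustration of how the statistic works, here is a small Python sketch that computes Cronbach’s alpha from its standard formula; the four-item data are simulated stand-ins for the competence scale, not the real responses:

```python
# Minimal sketch of Cronbach's alpha from its standard formula:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
# The four-item data below are illustrative, not the real competence scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
base = rng.normal(size=(73, 1))                     # shared "competence" signal
scale = base + rng.normal(scale=0.5, size=(73, 4))  # four correlated items

alpha = cronbach_alpha(scale)
print(f"Cronbach's alpha = {alpha:.3f}")  # compare against the .70 rule of thumb
```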

Output 7.4b has a Cronbach’s alpha of .798, which is also high and likewise suggests strong internal consistency. Output 7.4c is a different story: its Cronbach’s alpha is .688, below the .70 threshold suggested by Morgan et al. (2013). The authors suggest that this value indicates weaker internal consistency and that the items probably should not be combined.

References

Bélanger, E., Thomas, K. S., Jones, R. N., Epstein‐Lubow, G., & Mor, V. (2019). Measurement validity of the Patient‐Health Questionnaire‐9 in US nursing home residents. International Journal of Geriatric Psychiatry, 34(5), 700–708. https://doi.org/10.1002/gps.5074

Belur, J., Tompson, L., Thornton, A., & Simon, M. (2021). Interrater reliability in systematic review methodology: Exploring variation in coder decision-making. Sociological Methods & Research, 50(2), 837–865. https://doi.org/10.1177/0049124118799372

Morgan, G. A., Leech, N. L., Gloeckner, G. W., & Barrett, K. C. (2013). IBM SPSS for introductory statistics: Use and interpretation (5th ed.). Routledge.

Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach’s alpha. International Journal of Medical Education, 2, 53–55. https://doi.org/10.5116/ijme.4dfb.8dfd
