- **Classical Test Theory**: an observed score equals the true score plus error:

  **X = T + E**

- **Variance (σ²)**: the standard deviation squared → describes test score variability
- This statistic is useful because it can be broken up into components:
  - **True variance**: variance from true differences
  - **Error variance**: variance from irrelevant, random sources
- If σ² represents the total variance, σ²tr the true variance, and σ²e the error variance, then the relationship of the variances can be expressed as:

  **σ² = σ²tr + σ²e**
- **Reliability**: the proportion of the total variance attributed to true variance (see the numeric sketch below):

  **r = σ²tr / σ²**
- The greater the proportion of the total variance attributed to true variance, the more reliable the test
- The more reliable the test, the smaller the error variance (σ²e)
- Because true differences are assumed to be stable, they are presumed to yield consistent scores on repeated administrations of the same test, as well as on equivalent forms of the test.
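To make the decomposition concrete, here is a minimal Python sketch; the variance values are assumed for illustration, not taken from the notes:

```python
# Hypothetical variance components for a test (assumed numbers)
true_variance = 80.0    # variance from true differences
error_variance = 20.0   # variance from irrelevant, random sources

# Total variance = true variance + error variance
total_variance = true_variance + error_variance

# Reliability: proportion of total variance attributed to true variance
reliability = true_variance / total_variance

print(total_variance)  # 100.0
print(reliability)     # 0.8 -> 80% of score variability reflects true differences
```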
- **Measurement error**: all of the factors associated with the process of measuring some variable, other than the variable being measured.
  - E.g. a mathematics test administered in English to a group of Chinese 'whiz kids' newly arrived in America – they fail. Does the test show that these are not whiz kids? Possibly, but more likely their English language skills need evaluating – perhaps they did not do well because they could not read/understand the test. → The fact that the test was written in English could have contributed in large part to the measurement error in this evaluation.
**The Standard Error of the Difference Between Two Scores**

- Comparisons between scores are made using the standard error of the difference → a statistical measure that can determine how large a difference should be before it is considered statistically significant.
- Conventional significance levels: 5%, or 1% (more rigorous)
- The standard error of the difference between two scores can be the appropriate statistical tool to address three types of questions:
  - How did this individual's performance on test 1 compare with his/her performance on test 2?
  - How did this individual's performance on test 1 compare with someone else's performance on test 1?
  - How did this individual's performance on test 1 compare with someone else's performance on test 2?
- Essential that the scores being compared are converted to the same scale
- The formula for the standard error of the difference between two scores is:

  **σdiff = √(σ²meas1 + σ²meas2)**

  where σ²meas1 is the squared standard error of measurement for test 1 and σ²meas2 is the squared standard error of measurement for test 2.
- If we substitute reliability coefficients for the standard errors of measurement of the separate scores (since σ²meas = σ²(1 − r) for each test), the formula becomes:

  **σdiff = σ√(2 − r₁ − r₂)**

  where r₁ and r₂ are the reliability coefficients of tests 1 and 2, and σ is the standard deviation of both tests.
  - Note: both tests would have to have the same SD because they must be on the same scale.
- The standard error of the difference between two scores will be larger than the standard error of measurement for either score alone because the former is affected by measurement error in both scores.
- The value obtained by calculating the standard error of the difference is used in much the same way as the standard error of the mean, i.e. if we wish to be 95% confident that the two scores are different, we would want them to be separated by 2 standard errors of the difference. A separation of only one standard error of the difference would give us 68% confidence that the two true scores are different.
- **Example of use of the standard error of the difference between two scores**
\n- Situation of a corporate personnel manager who is seeking a highly responsible person for the position of vice president of safety. The personnel officer decides to use a new published test called the Safety-Mindedness Test (SMT) to screen applicants for the position. After placing an ad in the employment section of the local newspaper, the personnel officer tests 100 applicants for the position using the SMT and narrows down too the two highest scorer: Moe (score: 125), and Larry, (score:134).<\/li>\n
  - Assuming the measured reliability of this test to be .92 and its SD to be 14, should the personnel officer conclude that Larry performed significantly better than Moe? To answer this question, first calculate the standard error of the difference (a code sketch of this computation follows this list):

    **σdiff = 14√(2 − .92 − .92) = 14√.16 = 5.6**
  - Note: in this application of the formula, the two test reliability coefficients are the same because the two scores being compared are derived from the same test.
- For any standard error of the difference, we can be:
  - 68% confident that two scores differing by 1σdiff represent true score differences
  - 95% confident that two scores differing by 2σdiff represent true score differences
  - 99.7% confident that two scores differing by 3σdiff represent true score differences
- Applying this info to the standard error of the difference just computed for the SMT, we see that the personnel officer can be:
  - 68% confident that two scores differing by 5.6 points represent true score differences
  - 95% confident that two scores differing by 11.2 points represent true score differences
  - 99.7% confident that two scores differing by 16.8 points represent true score differences
- The difference between Larry's and Moe's scores is only 9 points, less than the 11.2 points required, so it is not a large enough difference for the personnel officer to conclude with 95% confidence that the two individuals have true scores that differ on this test.
- If Larry and Moe were to take a parallel form of the SMT, the personnel officer could not be 95% confident that at the next testing, Larry would outperform Moe.
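A minimal Python sketch of the SMT computation above (the helper name is just for illustration):

```python
import math

def se_of_difference(sd: float, r1: float, r2: float) -> float:
    """Standard error of the difference: sd * sqrt(2 - r1 - r2)."""
    return sd * math.sqrt(2 - r1 - r2)

# SMT example: SD = 14, reliability .92 for both scores (same test)
sdiff = se_of_difference(sd=14, r1=0.92, r2=0.92)
print(round(sdiff, 1))        # 5.6
print(round(2 * sdiff, 1))    # 11.2 -> separation needed for 95% confidence

# Larry (134) vs Moe (125): a 9-point difference
print(abs(134 - 125) >= 2 * sdiff)  # False -> not significant at the 95% level
```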
## THE CONCEPT OF VALIDITY

- **Validity**: a judgment/estimate of how well a test measures what it purports to measure in a particular context.
- A judgment based on the evidence about the appropriateness of inferences drawn from test scores.
- No test is universally valid → a 'valid test' means it is valid for a particular use with a particular population of testtakers at a particular time.
- **Validation**: the process of gathering and evaluating evidence about validity.
- Test users may conduct their own validation studies → local validation studies (necessary when the test user plans to alter in some way the format, instructions, language, or content of the test)
- Validity has traditionally been conceptualised into 3 categories (the trinitarian model of validity):
  - **Content validity**: measure of validity based on an evaluation of the subjects, topics, or content covered by the items in the test
  - **Criterion-related validity**: measure of validity obtained by evaluating the relationship of scores obtained on the test to scores on other tests or measures
  - **Construct validity**: measure of validity that is arrived at by executing a comprehensive analysis of:
    1. how scores on the test relate to other test scores and measures
    2. how scores on the test can be understood within some theoretical framework for understanding the construct that the test was designed to measure.
- Trinitarian view → construct validity is the "umbrella validity" because every other variety of validity falls under it.
### Face validity

- Relates more to what a test appears to measure to the person being tested than to what the test actually measures
- **Face validity**: a judgment concerning how relevant the test items appear to be
  - E.g. a paper-and-pencil personality test named The Introversion/Extraversion Test, with items that ask respondents whether they have acted in an introverted or extraverted way in particular situations, may be perceived by respondents as a highly face-valid test
- Face validity is judged from the perspective of the testtaker
### Content validity

- **Content validity**: describes a judgment of how adequately a test samples behaviour representative of the universe of behaviour that the test was designed to sample.
  - E.g. the universe of behaviour referred to as assertive is very wide-ranging – a content-valid, paper-and-pencil test of assertiveness would be one that is adequately representative of this wide range.
- For educational achievement tests to be content-valid → the proportion of material covered by the test should approximate the proportion of material covered in the course.
- For an employment test to be content-valid → its content must be a representative sample of the job-related skills required for employment.
**The quantification of content validity**

- Content validity is important in employment settings, where tests used to hire and promote people are carefully scrutinised for their relevance to the job, among other factors.
- One method of measuring content validity gauges agreement among raters or judges regarding how essential a particular item is.
- Lawshe (1975) proposed that, for each item, each rater respond to the following question: "Is the skill or knowledge measured by this item:
  - essential,
  - useful but not essential, or
  - not necessary
  to the performance of the job?"
- If more than half the panelists indicate that an item is essential, that item has at least some content validity → greater levels of content validity exist as larger numbers of panelists agree that a particular item is essential.
**Content Validity Ratio (CVR)**

1. Negative CVR: when fewer than half the panelists indicate "essential", the CVR is negative
2. Zero CVR: when exactly half the panelists indicate "essential", the CVR is zero
3. Positive CVR: when more than half but not all the panelists indicate "essential", the CVR ranges between .00 and .99
\n- If the amount of agreement observed is more than 5% likely to occur because of chance, the item should be eliminated.<\/li>\n<\/ul>\n
**Culture and the relativity of content validity**

- A history test considered valid in one classroom, at one time, and in one place will not necessarily be considered so in another classroom, at another time, and in another place.
- Politics may play a part in perceptions and judgments concerning the validity of tests and test items.
## CRITERION-RELATED VALIDITY

**Criterion-related validity**: a judgment of how adequately a test score can be used to infer an individual's most probable standing on some measure of interest – the measure of interest being the criterion.

**Concurrent validity**: an index of the degree to which a test score is related to some criterion measure obtained at the same time.

**Predictive validity**: an index of the degree to which a test score predicts some criterion measure.

**What is a criterion?**

- **Criterion**: a standard against which a test or a test score is evaluated.
- Characteristics of an adequate criterion:
  - It is relevant
  - It is valid for the purpose for which it is being used
  - It is uncontaminated
- **Criterion contamination**: the term applied to a criterion measure that has been based, at least in part, on predictor measures.
\n- E.g. a hypothetical \u201cInmate Violence Potential Test\u201d (IVPT) designed to predict a prisoners potential for violence in the cell block. in part, this evaluation entails ratings from fellow inmates, guards, and other staff in order to come up wit ha number that represents each inmate\u2019s violence potential. After all the inmates in the study have been given scores on this test, the study authors then attempt to validate the test by asking guards to rate each inmate on their violence potential. Because the guards\u2019 opinions were used to formulate the inmate\u2019s test score in the first place (the predictor variable), the guards\u2019 opinions cannot be used as a criterion against which to judge the soundness of the test. If the guards\u2019 opinions were used both as a predictor and as a criterion, the new would say that criterion contamination had occurred.<\/li>\n<\/ul>\n
### Concurrent validity

- If test scores are obtained at about the same time as the criterion measures are obtained, measures of the relationship between test scores and the criterion provide evidence of concurrent validity.
- Statements of concurrent validity indicate the extent to which test scores may be used to estimate an individual's present standing on a criterion.
  - E.g. if scores (or classifications) made on the basis of a psychodiagnostic test were to be validated against a criterion of already diagnosed psychiatric patients, then the process would be one of concurrent validation.
- Once the validity of the inference from the test scores is established, a test may provide a faster, less expensive way to offer a diagnosis or classification decision.
- A test with satisfactorily demonstrated concurrent validity may be appealing → potential savings of money and time
- Sometimes the concurrent validity of a particular test (test A) is explored with respect to another test (test B) that has already been validated → how well does test A compare with test B? (A sketch follows this list.)
  - Test B = the **validating criterion**.
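A minimal sketch of checking a new test A against an already-validated test B serving as the validating criterion; all scores are made up for illustration:

```python
import numpy as np

# Hypothetical scores for the same 8 testtakers on new test A and
# already-validated test B (the validating criterion)
test_a = np.array([12, 15, 9, 20, 14, 17, 11, 18])
test_b = np.array([48, 55, 40, 68, 52, 60, 45, 63])

# The correlation between A and B is the concurrent validity evidence
r = np.corrcoef(test_a, test_b)[0, 1]
print(round(r, 2))  # close to 1.0 here -> test A tracks the validated test B
```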
### Predictive validity

- Test scores may be obtained at one time and the criterion measures obtained at a future time, usually after some intervening event has taken place.
- The intervening event may be training, experience, therapy, medication, or the passage of time.
- Measures of the relationship between test scores and a criterion measure obtained at a future time provide an indication of the predictive validity of the test → how accurately scores on the test predict some criterion measure
  - E.g. measures of the relationship between college admission tests and freshman GPAs provide evidence of the predictive validity of the admission tests.
**The validity coefficient**

- A correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure
  - E.g. the correlation coefficient computed from a score (or classification) on a psychodiagnostic test and the criterion score (or classification) assigned by psychodiagnosticians.
- Typically the Pearson correlation coefficient is used
  - However, others can be used depending on the type of data, the sample size, and the shape of the distribution
- The validity coefficient is affected by restriction or inflation of range → the key issue is whether the range of scores employed is appropriate to the objective of the correlational analysis (a simulation sketch follows this list)
\n- e.g. in situations where attrition in the number of subjects has occurred over the course of the study, the validity coefficient may be adversely affected.<\/li>\n<\/ul>\n
## CONSTRUCT VALIDITY

- **Construct validity**: a judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called a construct.
- **Construct**: a well-informed, scientific idea developed or hypothesised to describe or explain behaviour.
  - E.g. *intelligence* is a construct that may be invoked to describe why a student performs well in school.
  - *Anxiety* is a construct that may be invoked to describe why a psychiatric patient paces the floor.
- Constructs are unobservable, presupposed (underlying) traits that a test developer may invoke to describe test behaviour or criterion performance.
- The researcher investigating construct validity must formulate hypotheses about the expected behaviour of high and low scorers on the test.
- If the test is a valid measure of the construct, then high scorers and low scorers will behave as predicted by the theory (a sketch of one such check follows this list).
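One simple way to run that hypothesis check: split testtakers into high and low scorers and test whether the groups differ on the theoretically predicted behaviour. A sketch with made-up data and an assumed score-behaviour relationship:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical data: test scores plus an observed behaviour the theory
# predicts should track the construct (numbers are simulated)
scores = rng.normal(50, 10, size=200)
behaviour = 0.5 * scores + rng.normal(0, 5, size=200)

# High and low scorers: top and bottom quartiles on the test
high = behaviour[scores >= np.quantile(scores, 0.75)]
low = behaviour[scores <= np.quantile(scores, 0.25)]

# If the test validly measures the construct, the two groups should
# differ on the behaviour in the predicted direction
t, p = stats.ttest_ind(high, low)
print(f"t = {t:.2f}, p = {p:.4f}")  # a significant difference supports the theory
```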
**The Standard Error of Measurement and the True Score**

- We don't know the true score for any individual testtaker, so we must estimate it
- The best estimate available of the individual's true score on the test is the test score already obtained.
- Thus, if a student achieved a test score of 50 on one spelling test and the test had a SEM of 4, then, using 50 as the point estimate, we can be:
  - 68% confident that the true score falls within 50 ± 1 SEM (46–54)
  - 95% confident that the true score falls within 50 ± 2 SEM (42–58)
  - 99.7% confident that the true score falls within 50 ± 3 SEM (38–62)
- The standard error of measurement is:

  **σmeas = σ√(1 − r)**

  where σ is the standard deviation of test scores and r is the reliability coefficient of the test.

- If the SD of a test is held constant, then the smaller the σmeas, the more reliable the test will be; as r increases, σmeas decreases.
  - E.g. for a test with SD = 10 and reliability r = .84: σmeas = 10√(1 − .84) = 10√.16 = 4.
\n- SEM most frequently used in interpretation of individual test scores<\/li>\n
- **Confidence interval**: a range or band of test scores that is likely to contain the true score.
- Calculating a confidence interval, e.g. at 95% confidence → suppose a 22-year-old testtaker obtained a FSIQ of 75. The test user can be 95% sure that this testtaker's true FSIQ falls in the range of 70 to 80 → take the observed score of 75, plus or minus 1.96 multiplied by the standard error of measurement. In the test manual we find that the standard error of measurement of the FSIQ for a 22-year-old testtaker is 2.37. With this info in hand, the 95% confidence interval is calculated as follows (a code sketch follows below):

  **75 ± 1.96(2.37) = 75 ± 4.65 ≈ 75 ± 5**
Therefore, 75 − 5 = 70 and 75 + 5 = 80 → the confidence interval is 70–80.
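A minimal sketch of that confidence interval computation, using the values from the example:

```python
def confidence_interval(observed: float, sem: float, z: float = 1.96):
    """Confidence interval around an observed score: observed +/- z * SEM."""
    margin = z * sem
    return observed - margin, observed + margin

# FSIQ of 75 for a 22-year-old testtaker, SEM = 2.37, 95% confidence (z = 1.96)
low, high = confidence_interval(75, 2.37)
print(f"95% CI: {low:.2f} to {high:.2f}")  # 70.35 to 79.65 -> roughly 70 to 80
```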