NCE Ch 7

Types of clinical interviewing
structured
semi-structured
unstructured
types of informal assessment
observation of behavior
rating scales
classification techniques
records and personal documents
types of personality assessment
standardized tests (MMPI)
projective tests (TAT)
interest inventories (Strong)
types of ability assessment
achievement tests (WRAT)
aptitude tests (SAT)
intelligence tests (WISC)
Jean Esquirol
developed forerunner of verbal IQ
recognized mental retardation was related to developmental deficiencies not mental illness
Edouard Seguin
developed the form board, which improved motor skills for individuals with mental retardation
Sir Francis Galton
developed first test of intelligence
looked at the relationship between reaction time, grip strength, and intelligence
Wilhelm Wundt
established the first psychology laboratory
James McKeen Cattell
applied statistical concepts to psych assessment
Hermann Ebbinghaus
studied human memory and the forgetting curve
Alfred Binet
first modern intelligence test, Binet-Simon scale
Lewis Terman
revised the Binet-Simon scale – became the Stanford-Binet
incorporated IQ
Intelligence quotient (IQ)
mental age divided by chronological age, then multiplied by 100
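Worked example using the ratio formula: IQ = (mental age / chronological age) × 100, so a child with a mental age of 12 and a chronological age of 10 scores (12/10) × 100 = 120.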
Robert Yerkes
developed Army Alpha and Beta
Army Alpha vs. Army Beta
both were intelligence tests used to screen the cognitive abilities of military recruits
Alpha – original version
Beta – language-free version
James Bryant Conant
developed the Scholastic Aptitude Test (SAT)
Edward Thorndike
developed the Stanford Achievement Test, the first achievement test battery to measure the academic performance of large groups of students
MMPI
Minnesota Multiphasic Personality Inventory – objective measurement of personality

The updated version is widely used to diagnose psychopathology

Jung, Rorschach, Murray
all created well known projective techniques
(Jung’s word associations, Rorschach’s inkblots, Murray’s Thematic Apperception Test)
Frank Parsons
father of vocational guidance and counseling
Edward Strong
Strong Vocational Interest Blank — became Strong Interest Inventory
assessment vs. test
assessment: broad term, systematic process of gathering and documenting client info

test: subset of assessment, yields data regarding an examinee’s responses to test items

interpretation vs. evaluation
interpretation: part of assessment process, prof counselor assigns meaning to the data yielded by evaluative procedures (comparing indl to peer grp, using predetermined standard/criteria or professional judgement)

evaluation: making a determination of worth or significance based on the result of a measurement (ex. using Beck Depression Inventory to evaluate client’s progress in counseling)

Power tests
limit perfect scores by including difficult test items that few indls can answer correctly
maximal performance test
used if a counselor wants info regarding the client’s best attainable score/performance

ex. achievement and aptitude tests

typical performance test
concerned w/ one’s characteristic or normal performance

ex. personality measurements

Standardized tests vs. non-standardized tests
Standardized tests are valid and reliable; individual scores can be compared to a norm group (e.g., SAT and GRE)

non-standardized tests are interpreted solely on the basis of the counselor's judgment (e.g., projective personality measures)

Objective vs. subjective tests
objective tests provide consistency in administration and scoring to ensure the examiner’s own beliefs/biases don’t interfere (often have a “right” answer)

subjective tests are sensitive to rater and examinee beliefs (ex. open-ended questions)

Standards for Educational and Psychological Testing
appropriate and ethical use of tests
developed by the APA, AERA (American Educational Research Association), and NCME (National Council on Measurement in Education)
Responsibilities of Users of Standardized Tests (RUST)
policy statement published by AACE (Association for Assessment in Counseling and Education), a division of ACA

ensure ACA members use standardized tests in accurate, fair and responsible manner

Joint Committee on Testing Practices (JCTP)
collaboration btw AERA, APA, NCME, and ACA
publishes documents concerning testing in education, psychology, and counseling:

1. Rights and Responsibilities of Test Takers
2. Test User Qualifications
3. Code of Fair Testing Practices in Education

Civil Rights Act of 1964
assessments used to determine employment must relate strictly to the duties outlined in the job description and cannot discriminate based on race, color, religion, sex (including pregnancy), or national origin
Family Educational Rights and Privacy Act of 1974 (FERPA)
Ensures the confidentiality of student test records by restricting access to scores. Affirms the rights of both student and parent to view student records.
Individuals with Disabilities Education Improvement Act of 2004
Confirms the rights of students believed to have a disability to receive testing at the expense of the public school system. Students with disabilities receive an Individualized Education Program (IEP) that specifies the accommodations the student receives to optimize learning.
The Vocational and Technical Education Act of 1984
Carl Perkins Act

provides access to vocational assessment, counseling, and placement services for the economically disadvantaged, those with disabilities, individuals entering nontraditional occupations, adults in need of vocational training, single parents, ESL individuals, and incarcerated individuals

Americans with Disabilities Act of 1990 (ADA)
employment testing must accurately measure a person's ability to perform pertinent job tasks without confounding the assessment results with a disability; the Act ensures that persons with disabilities receive appropriate accommodations during test administration
Health Insurance Portability and Accountability Act of 1996 (HIPAA)
secures the privacy of client records by requiring agencies to obtain client consent before releasing records to others, grants clients access to their records
Larry P. v. Riles
ruled that schools had used biased intelligence tests, which led to an over-representation of African American children in programs for students w. educable mental retardation.

Counselors must provide written documentation that demonstrates use of nondiscriminatory and valid assessment tools.

Diana v. California State Board of Ed
requires that schools provide tests to students in their first language as well as in English to limit linguistic bias.
Sharif v. New York State Educational Dept
ruled that the SAT alone cannot be used to determine scholarship awards
Griggs v. Duke Power Company
ruled that assessments used in job hiring and promotion process must be job related
Regents of the University of California v. Bakke
barred use of quota systems for minority admissions procedures in US colleges and universities
Mental Measurement Yearbook (MMY)
contains basic information on tests, including reliability and validity data, plus critiques by experts
Tests in Print
published by the Buros Institute of Mental Measurements

comprehensive list of all published and commercially available tests in psychology and education, does not provide critical reviews

Tests
published by Pro-Ed, contains concise instrument information, does not include critiques, validity or reliability
Test Critiques
published by Pro-Ed, companion to Tests

contains info re: reliability and validity, but in simple language

Validity
how accurately an instrument measures a given construct

– what an instrument measures
– how well it does so
– the extent to which meaningful inferences can be made from the instrument’s results

content validity
extent to which an instrument’s content is approp to its intended purpose

test items must reflect all major content areas covered by the domain
must contain items that measure the physical, psychological and cognitive factors of the domain
number of test items covering ea. content area must represent the importance of the content in the domain

criterion validity
effectiveness of an instrument in predicting an indl’s performance on a specific criterion

there are two types of criterion validity: concurrent and predictive validity

Concurrent validity
a type of criterion validity

concerned w. the relationship btw an instrument’s results and another currently obtainable criterion

For example: give a depression instrument to a group of adults and track the number of times those adults were admitted to the hospital for suicidal ideation; we would expect to see a relationship between scores on the depression inventory and hospital visits.

predictive validity
a type of criterion validity

examines relationship btw the instrument’s results and a criterion collected in the future

ex. a depression inventory that predicted hospitalization for suicidal ideation in the upcoming year

construct validity
extent to which an instrument measures a theoretical construct (idea or concept)

determined by experimental designs, factor analysis, convergence with other similar measures, and discrimination from dissimilar measures

experimental design validity
type of construct validity

uses a controlled scientific study to demonstrate that the instrument measures the construct as intended

factor analysis
statistical technique that analyzes the interrelationships of an instrument’s items

must show that the instrument’s sub-scales are statistically related to ea. other (but not too closely) and the larger construct

convergent validity
shows that the instrument is related to what it theoretically should be

ex. you could correlate a depression inventory with the Beck Depression Inventory

discriminant validity
established when measures of constructs that are not theoretically related are observed to have no relationship
face validity
not a real type of validity

whether an assessment looks like it is valid or credible

validity coefficient
correlation btw test score and the criterion measure
regression equation
used to predict an individual's future score on a criterion based on a current test score

ex. predict GPA based on SAT score
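A sketch with hypothetical numbers: predicted GPA = a + b(SAT); if, say, a = 0.8 and b = 0.002, an SAT score of 1200 predicts a GPA of 0.8 + 0.002(1200) = 3.2 (the intercept and slope here are made up for illustration).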

standard error of estimate
statistic that indicates the expected margin of error in a predicted criterion score due to the imperfect validity of the test
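The usual formula: SE(est) = SD(Y) × √(1 − r²), where r is the validity coefficient between predictor and criterion; the higher the validity, the smaller the margin of error around a predicted score.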
decision accuracy
accuracy of instrument in supporting counselor decisions
reliability
consistency of scores attained by the same person on different administrations of the same test

ideally we’d like the score to be the same

spearman-brown formula
compensates mathematically for shorter length when determining split-half reliability
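For the split-half (double-length) case: corrected r = 2r(half) / (1 + r(half)); e.g., a split-half correlation of .60 corrects to (2 × .60) / 1.60 = .75.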
Two formulas for interitem consistency
Kuder-Richardson (for use when responses are dichotomous – yes/no)
Cronbach’s Coefficient Alpha (test items result in multipoint response like a Likert scale)
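A common statement of coefficient alpha (KR-20 is the special case for dichotomous items): α = [k / (k − 1)] × [1 − (sum of item variances / variance of total scores)], where k is the number of items.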
reliability coefficient
closer the coefficient is to 1 the more reliable the scores

typically range from .8-.95

standard error of measurement
used to estimate how scores from repeated administrations of the same instrument are distributed around the true score
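The usual formula: SEM = SD × √(1 − r), where r is the reliability coefficient; e.g., SD = 15 and r = .91 give SEM = 15 × √.09 = 4.5, so an observed score of 100 suggests a true score within roughly ±4.5 points (at one SEM).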
classical test theory
an individual's observed score is the sum of the true score plus the amount of error present during test administration
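In symbols: X = T + E (observed score = true score + error); reliability reflects the proportion of observed-score variance that is true-score variance.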
item response theory
applies mathematical models to test data to evaluate how well individual test items and the test as a whole work; can detect item bias
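One common IRT model (the one-parameter, or Rasch, model) gives the probability of a correct response as P = 1 / (1 + e^−(θ − b)), where θ is the person's ability and b is the item's difficulty; more complex models add discrimination and guessing parameters.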
nominal scale
NAME
categories like male/female
ordinal
ORDER
rank-order data, like a Likert scale or 1st, 2nd, 3rd
interval scale
measures equal distances between each point on the scale (e.g., temperature) but has no true zero (you can't have no temperature); IQ scores are another example
ratio scale
has a true zero, ex. height (you can have no height)
criterion-referenced assessment
provides information on an individual's score by comparing it to a predetermined standard or set of criteria

ex. 90-100 is an A

ipsative assessment
indl’s test score is compared against a previous test score
z-score
most basic type of standard score, has a mean of 0 and standard dev of 1
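Formula: z = (X − M) / SD; e.g., a score of 115 on a scale with M = 100 and SD = 15 gives z = (115 − 100) / 15 = 1.0.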
t-score
standard score that has an adjusted mean of 50 and stnd dev of 10
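Conversion from a z-score: T = 50 + 10z, so z = 1.0 becomes T = 60.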
stanine
“standard nine” type of standard score used on achievement tests, represent a range of z-scores and percentiles, mean of 5 and std dev of 2
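A common conversion from a z-score: stanine ≈ 2z + 5, rounded to the nearest whole number and capped at 1–9, so z = 1.0 falls in stanine 7.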
normal curve equivalents (NCE)
developed by the U.S. Department of Education
used in the education community to measure student achievement
range from 1 to 99; indicate how an individual ranked in relation to peers
mean = 50, std dev = 21.06
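Conversion from a z-score: NCE = 50 + 21.06z, so z = 1.0 corresponds to an NCE of about 71.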
age-equivalent scores
type of developmental score that compares an indl’s score w. the avg score of those of the same age
grade- equivalent score
type of developmental score that compares an indl’s score w. the avg score of those at the same grade level
Types of bias in an assessment
examiner bias: examiner’s beliefs/behavior influence the test administration
interpretive bias: examiner’s interpretation provides unfair advantage/disadvantage to the client
response bias: the examinee's response style (e.g., answering in a socially desirable or acquiescent way) distorts the results
