Measurement
Validity
- It is important to pre-test questions to make sure they
are clear, measure the concept that you are attempting to collect
data on, and are reliable and valid.
- Measures of validity and reliability are critical for knowing whether your instrument was sound.
- Reliability means that answers to the question are stable
over time and do not vary because of the question itself.
- The wording of questions can mean different things to
different people, so testing for reliability is important.
- Stability reliability – consistency of answers across time. Administering the same survey to the same people twice, with time in between, is the best way to measure this (see the test-retest sketch below).
- Representative reliability – consistency across different groups of people. This can be tested by comparing responses across sub-groups of the sample and checking whether the indicator behaves the same way for each group.
- Equivalence reliability – using multiple indicators (questions) of the same concept and checking that they yield consistent answers (see the sketch below).
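
A minimal sketch of how these reliability checks are often quantified, assuming responses are already available as numeric arrays; all variable names and sample values below are hypothetical. Stability reliability is commonly summarized with a test-retest correlation, and equivalence reliability with an internal-consistency statistic such as Cronbach's alpha.

```python
import numpy as np

# Hypothetical data: the same 5 respondents answering the same item at
# time 1 and again at time 2 (stability / test-retest reliability).
time1 = np.array([4, 3, 5, 2, 4])
time2 = np.array([4, 3, 4, 2, 5])

# Stability reliability: correlation of answers across the two administrations.
test_retest_r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest correlation: {test_retest_r:.2f}")

# Hypothetical data: 5 respondents x 3 items intended to measure the same
# concept (equivalence reliability / internal consistency).
items = np.array([
    [4, 4, 5],
    [3, 2, 3],
    [5, 5, 4],
    [2, 2, 2],
    [4, 3, 4],
])

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```

Both statistics fall roughly between 0 and 1 for typical survey data; a common rule of thumb treats values around 0.7 or higher as acceptable reliability.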
- Validity is how well the question represents the concept being studied. Well-defined, specific indicators make it easier to write valid questions.
- Face validity is the acceptance by the scientific community that a certain indicator represents a concept.
- Content validity goes beyond face validity and requires that the measure, or set of indicators, represents the full content of the concept.
- Criterion validity compares the new measure against another, already established measure (see the sketch after the two approaches below).
- This can be done in a concurrent way, where the results of the new measure are compared to an already accepted, valid measure collected at the same time.
- Or in a predictive way, where the results of the new measure are used to predict a later outcome and the predictions are checked against what actually occurs.
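
A minimal sketch of a concurrent criterion check, assuming the new measure and an already accepted measure have been collected from the same respondents; the variable names and scores are hypothetical. A strong correlation between the two suggests the new measure tracks the accepted criterion.

```python
from scipy.stats import pearsonr

# Hypothetical scores from the same respondents on the new measure and on an
# already accepted ("criterion") measure of the same concept.
new_measure = [12, 18, 9, 22, 15, 20]
accepted_measure = [14, 19, 10, 21, 13, 22]

# Concurrent criterion validity: correlate the new measure with the accepted one.
r, p_value = pearsonr(new_measure, accepted_measure)
print(f"concurrent validity: r={r:.2f}, p={p_value:.3f}")

# Predictive criterion validity works the same way, except the second variable
# is an outcome observed later (something the measure is supposed to predict).
```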
- Construct validity applies when multiple indicators are used; it asks whether those indicators operate consistently and come to a similar outcome (see the sketch at the end of these notes).
- Convergent validity is when indicators of the same concept produce similar results.
- Discriminant validity is when indicators of opposing concepts produce opposing (diverging) results.
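
A minimal sketch of convergent and discriminant checks with multiple indicators, assuming each indicator is a numeric column in a data frame; the indicator names and values are hypothetical. Indicators of the same concept should correlate strongly with each other (convergent), and weakly or in the opposite direction with indicators of a different concept (discriminant).

```python
import pandas as pd

# Hypothetical responses: two indicators of the same concept ("a1", "a2") and
# one indicator of an opposing or unrelated concept ("b1").
df = pd.DataFrame({
    "a1": [4, 3, 5, 2, 4, 1],
    "a2": [5, 3, 4, 2, 4, 2],
    "b1": [1, 4, 2, 5, 1, 4],
})

corr = df.corr()

# Convergent validity: indicators of the same concept should correlate highly.
print(f"a1 vs a2 (convergent): {corr.loc['a1', 'a2']:.2f}")

# Discriminant validity: indicators of different concepts should correlate
# weakly or negatively compared with the convergent correlation.
print(f"a1 vs b1 (discriminant): {corr.loc['a1', 'b1']:.2f}")
```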
