Construct validity occurs when the theoretical constructs of
cause and effect accurately represent the real-world situations
they are intended to model. It is related to how well the
experiment is operationalized: a good experiment turns the theory
(constructs) into actual things you can measure. Construct
validity defines how well a test or experiment measures up to its
claims. A test designed to measure depression must measure only
that particular construct, not closely related ideas such as
anxiety or stress. It includes two sub-types:
- Convergent validity tests that constructs that are expected to
be related are, in fact, related.
- Discriminant validity, also referred to as divergent validity,
tests that constructs that should have no relationship do, in
fact, have no relationship.
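In the simplest case, both sub-types can be checked by computing correlations between scores. Below is a minimal sketch in Python, with made-up participant scores, hypothetical scale names, and a hand-rolled Pearson correlation: a new depression scale should correlate strongly with an established depression measure (convergent validity) and only weakly with an unrelated construct (discriminant validity).

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up scores for six participants (all measures hypothetical).
depression_new = [10, 14, 8, 20, 16, 12]   # the new depression scale
depression_est = [11, 15, 9, 21, 17, 13]   # an established depression measure
shoe_size      = [9, 8, 8, 9, 8, 9]        # a construct that should be unrelated

print(pearson(depression_new, depression_est))  # high: convergent validity
print(pearson(depression_new, shoe_size))       # near zero: discriminant validity
```

A real validation study would use proper samples and significance tests; the point here is only the pattern of correlations, not the toy numbers.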
Content validity occurs when the experiment provides adequate
coverage of the subject being studied. This includes measuring the
right things as well as having an adequate sample. Samples should
be both large enough and drawn from appropriate target groups.
Content validity is related very closely to good experimental
design.
For example, an educational test with strong content validity
will represent the subjects actually taught to students, rather
than asking unrelated questions.
Content validity is often seen as a prerequisite to criterion
validity, because it is a good indicator of whether the desired
trait is measured.
Face validity occurs when something appears to be valid. This,
of course, depends very much on the judgment of the observer. Face
validity is a measure of how representative a research project is
'at face value,' and whether it appears to be a good project. It
requires a personal judgment, such as asking participants whether
they thought that a test was well constructed and useful.
The difference from content validity is that content validity is
carefully evaluated, whereas face validity is a more general
measure, and the subjects often have input.
An example could be asking a group of students for feedback after
they sat a test: specifically, whether they thought that the test
was a good one. This enables refinements for the next research
project and adds another dimension to establishing validity.
Criterion validity assesses whether a test reflects a certain
set of abilities. It includes concurrent and predictive
validity.
Concurrent validity measures the test against a benchmark test;
a high correlation indicates that the test has strong criterion
validity. A new intelligence test, for example, could be
statistically analyzed against a standard IQ test; if there is a
high correlation between the two data sets, then the criterion
validity is high. This is a good example of concurrent
validity.
Predictive validity is a measure of how well a test predicts
abilities. It involves testing a group of subjects for a certain
construct and then comparing them with results obtained at some
point in the future. The most common use of predictive validity is
inherent in the process of selecting students for university.