Study Questions for Quiz #1
(Remember that the parenthetical portions of the questions will not appear on the Quiz! They are there to help you identify and shape the content of a complete answer!)
Research Goals & Hypotheses
1. Describe the fundamental difference between the intent and the practice of hypothesis testing research. The intent of psychological research is to provide definitive results that prove or disprove causal hypotheses about relationships between psychological constructs, so that the results can be broadly applied.
2. Compare and contrast the "critical experiment" and "converging operations" approaches to acquiring knowledge. Which is the current paradigm of psychological research, and why? Describe the research loop (be sure to tell each stage) and tell how it is related to “converging operations.”
3. Respond to the statement, "Once I've got my Ph.D. I'm never going to have to worry about research again!!!"
4. Describe the three different goals or types of knowledge sought by psychological researchers. How are these types interrelated?
5. Give a general definition of a research hypothesis and tell its most important property. Describe the three types of hypotheses and tell what kinds of evidence (conditions) must be supplied to support each. (Be sure to mention which type of hypothesis is tied to each of the "goals of the scientific method".)
External Validity
6. Describe the (four) basic types of validity we want our research conclusions to have.
7. Distinguish among the components of external validity, the types of variables associated with each, and tell how we provide evidence for each.
8. Describe "cultural" and "ecological" validity and tell how they relate to the components of external validity we've discussed.
9. Describe the different approaches to "defending" the external validity of a study.
10. Why is it important to distinguish between "generalizability" and "applicability" as possible synonyms of external validity? What does it mean to say that "external validity is in the eye of the applier"? Do you agree with this statement? Why or why not?
11. What is "participant sampling" and what are the choices a researcher makes when designing a participant sampling methodology? How does sampling influence the validity of the study?
34. How does the study of external validity inform our understanding of “sampling”? What are the “kinds of sampling” that we must intentionally engage when planning our research and data collection?
12. What is required to have a truly random sample? Is this often accomplished? When you are told that a sample is "random," what is usually meant by this?
13. How does sample size relate to internal and external validity? What must we consider when selecting the sample size for our study?
14. Differentiate between the "selection" and "assignment" of subjects and describe the purpose and procedures used for each.
Internal Validity
15. Describe the inter-relationships among the components of these three distinctions among the measures/behaviors studied by researchers: 1) constant vs. variable, 2) measured vs. manipulated, and 3) cause vs. effect.
16. Describe the variables/constants that "exist before the study" and those that "exist after the study". What is the moral of this distinction?
17. Distinguish among the components of internal validity, the types of variables associated with each, and tell how we provide evidence for each. (You'll want to include in your answer the material about how length of the study can influence ongoing equivalence -- from the end of the next lecture.)
18. Describe how we know what “confounds” are involved and the “standards” for their control when designing our research or evaluating the research of others.
19. Distinguish between- vs. within-groups designs and tell how random assignment is applied to each. Tell the "unacceptable" procedures for participant assignment. Why are these all unacceptable?
20. What is counterbalancing? For what kinds of designs is it applied? Describe why failing to counterbalance, or “unsuccessful counterbalancing” (when the procedure doesn’t actually give you equivalence across conditions), is an issue of initial equivalence (and not ongoing equivalence). When, for a within-groups design, is counterbalancing not necessary to support causal interpretation of the results? Why isn’t it?
21. Describe the different uses of random assignment and tell what aspects of internal and external validity are enhanced by each. Tell the kinds of “attrition” and what aspects of validity each is a problem for and why.
22. Distinguish the different characterizations of the relationship between internal and external validity. (Please note: this question is NOT about internal vs. external validity! It is about two different ways of thinking about how they are related to each other!)
Research Designs
23. Describe the key components of a true experiment and how each contributes to the internal and the external validity of a study.
24. Distinguish between the different meanings of "IV" and describe why we have to be careful when applying the term.
25. Can all causal research hypotheses be studied? Why or why not? (Be sure to give examples to support your answer!)
26. Distinguish among the major types of research designs, focusing on the procedural differences among them. What are the relative advantages of these different designs to support internal and external validity claims for our research?
27. Suppose a colleague said to you, “Why even bother running non-experiments? We can’t get any useful information from them!” What seems to be the type of information this colleague thinks is the only useful kind? How should you respond to this statement?
Data Collection & The Research Process
28. Describe the relative advantages of observational and self-report data.
29. Describe the relative advantages of laboratory, structured, and field settings to promote the validity of the research (be sure to refer to both internal and external validity).
30. Respond to the statement, “Observational and survey research cannot provide information about causal relationships. Only experimental research can do that!”
31. Briefly describe the (six) key steps in the research process, telling the information or evidence provided by each. (Be sure to identify those steps which are only necessary for testing causal research hypotheses.)
32. What would you look for if handed an empirical research article and asked if the research it reports is valid?
33. Distinguish between the attributes of a research study that directly influence the causal interpretability of the results and those that do not. What are the attributes of a research study that make it difficult to ensure ongoing equivalence, and for what part of internal validity are they a problem?