
Opinion Article, provisionally accepted. The full text will be published soon.

Front. Psychol. | doi: 10.3389/fpsyg.2018.02295

Teaching Psychology Research Methodology across the Curriculum to Promote Undergraduate Publication: An Eight-Course Structure and Two Helpful Practices

  • 1Department of Psychology, Bishop's University, Canada

Teaching research methods is especially challenging because we not only wish to convey formal knowledge and encourage critical thinking, as with any course, but also to enable our students to dream up meaningful research projects, translate them into logical steps, conduct the research in a professional manner, analyze the data, and write up the project in APA style. We also wish to spark interest in the topics of research papers, and in the intellectual challenge of creating a research report, but we have learned just how difficult these goals can be from teaching undergraduates and from serving as journal reviewers: many submissions contain flaws such as elementary errors of logic (e.g., using a null control condition instead of a placebo or dummy treatment), tangled statistics, a lack of graphs, and ungrammatical, unclear writing that violates APA rules. Yet these manuscripts are written by university faculty, often with doctorates and years of experience. Even published papers may contain egregious faults (Standing & McKelvie, 1986). And although we have both published widely, we still hone our skills. It requires optimism to expect that a typical undergraduate will do better on the basis of just a year or two of studies in psychology. In this paper, we describe a systematic set of methodology courses and two specific practices that we think can help.

Methodology Courses as the Backbone of our Psychology Program
How can methodology courses promote undergraduate involvement in publishable research? In our undergraduate liberal arts institution in Québec, where the Bachelor’s degree normally takes three years following two years of college, we ask for more, rather than less: our psychology program has evolved since the 1960s to require a solid backbone of mandatory methods-related courses that is considerably more extensive than in most universities (McKelvie, 2000). Psychology majors take two consecutive introductory statistics courses in the first academic year, reaching the level of two-way ANOVA. Simultaneously with the second course, they take an introductory research methods course with lectures and discussion of important concepts, including theory, and involvement in research projects. In the second year, intending honors students must pass an advanced methods course that builds on the first one. It uses the same text, and continues active participation in project work. An unusual requirement of the program (McKelvie, 2000) is a course in Psychometrics and Psychological Testing, reflecting our belief that more attention should be paid to measurement. After second year, if students have achieved a program average of 80% or better, and a combined average of 75% in the advanced research methods course plus the second statistics course, they may enter the Honors program. In third year, they take an advanced (multivariate) statistics course, and produce an idea for a data-based Honors thesis under the direction of a main and a secondary supervisor. Students are strongly encouraged to create their own research question on a topic of their choosing. Over two semesters, they discuss this project in a seminar course and write a formal proposal; then, in the thesis course, they conduct the research and write the report.

These eight required methods-related courses produce well-grounded and motivated honors graduates, and give them an opportunity to publish. Our students accept with good grace that completing this program is a challenge: only about one-fifth of them obtain an honors degree rather than a major in psychology, and yet our departmental numbers have risen considerably over the years.

Traditional Solutions for Teaching Students to Grapple with Research Methods
A traditional solution in teaching research methods and exposing students directly to the research process has been first to lead them through a series of short, pre-packaged lab projects which demonstrate some well-established phenomena, and require brief write-ups, likely in APA format. This approach still has merit at the introductory stage, and the Online Psychology Laboratory experiments provided by the APA are valuable exercises for the neophyte. However, more emphasis appears to be placed today on original project work, as is the case when a student plans an honors thesis, for which they commonly choose the topic.
The basic problem is that too many original projects proposed by students lack a clear point to test, coherent methods, and a valid formal design. Additionally, sample sizes are almost guaranteed to be low, leading to inadequate statistical power. More fundamentally, students habitually gravitate to correlational relationships rather than to randomized controlled experiments. They must learn to better justify nonexperimental research.

Our Practices
Class discussion of published articles (first methods course). One practice to spark student interest in research papers and to help them design better studies involves critical class discussion of published articles (McKelvie, 1994, 2013). The papers are carefully chosen to capture student attention and to expose them to methodological issues. Study questions focus on important points in each reading. Students are also encouraged to generate their own critical comments and queries (McKelvie, 2013). This approach sits well with the typical case-oriented contents of leading methods texts (e.g., Morling, 2018).

Five study questions are common to all papers (McKelvie, 2013): What type(s) of research method is (are) involved? In particular, is it a true experiment? What inferential statistics were used? Were they appropriate? What is (are) the source(s) of the problem (theory, past research, practical intervention, everyday life)? One example is Motley and Camden’s (1985) study of sexual double entendres in lexical selection. It employed both experimental (manipulation of experimenter attractiveness) and non-experimental methods (sexual anxiety as a subject variable). Independent samples t-tests, chi-square and ANOVA were used. The study was based on theory and on everyday life. Another example is Milgram’s (1963) seminal observational study of obedience. It is nonexperimental, only contains descriptive statistics and is based on everyday life. Students find it challenging to identify the research method. Realizing that “laboratory” does not mean “experiment” is a valuable lesson.

Replication projects (second methods course). Although the traditional laboratory approach has merit, the projects may only be demonstrations or suffer from inadequate sample size.
One solution is for the whole class to work on the same project, created by the instructor in an area of their expertise, and perhaps involving original research. This approach means that the sample size will be adequate, and it may enable publishable research. Alternatively, the instructor can plan a replication exercise, selecting a paper from the literature that is widely quoted and of manageable scope, and leading the class through either a conceptual or (better) an exact replication of the target study. This means that a rationale for the study exists, the method is pre-established, and the sample size is likely to be healthy. Student involvement in research methods is never automatic, but it may be promoted by the realization that one can grapple with the same issues as published authors. Students seem to enjoy the chance to do ‘real’ research that is potentially publishable.

The parallel teams approach. A class may be divided into teams that each work on a different target article; preferably, however, the teams all try to replicate the same paper. This has the advantage of maximizing N, and we note that, to be adequately powered, replications should use more than the number of participants listed in the target article (Simonsohn, 2015). Additionally, the team results can be compared for consistency before pooling the data to increase power. When the teams have perhaps half a dozen members each, they can function most effectively as small, cohesive groups under the direction of an elected or designated team leader (for practical details see Standing, 2016).
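The power gain from pooling parallel-team data can be made concrete with a quick calculation. The sketch below is our illustration, not part of the original course materials; the effect size (d = 0.5) and team size (15 participants per condition) are hypothetical, and it uses a normal approximation to the power of a two-sided, two-sample t-test.

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test for effect
    size d with n_per_group participants in each condition.
    Normal approximation: slightly optimistic for small samples."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality, equal groups
    return 1 - NormalDist().cdf(z_crit - ncp)

# Hypothetical numbers: a medium effect, four teams of 15 per condition.
single_team = two_sample_power(0.5, 15)      # one team alone
pooled = two_sample_power(0.5, 4 * 15)       # all four teams pooled
print(f"one team: {single_team:.2f}, pooled: {pooled:.2f}")
```

Under these assumptions a single team detects the effect only about a quarter of the time, whereas the pooled sample approaches the conventional 0.80 power benchmark, which is the rationale for combining team data after checking their consistency.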

An example of a project using four parallel replication teams is described by Standing, Astrologo, Benbow, Cyr-Gauthier, and Williams (2016). This successfully replicated Experiment 8 in the study by Gailliot, Baumeister, DeWall, Maner, Plant, Tice et al. (2007), which made the controversial claim that self-control can be enhanced by consuming a glucose rather than a placebo drink. As possible authorship is motivating to many students, the four team leaders here were included as coauthors with the instructor, with the remaining members of the class acknowledged in a footnote. Alternatively, all members of a class team may be listed as authors, as in the case of a previous attempt focusing on the claim that priming a participant with a trait such as ‘intelligence’, or a stereotype such as ‘professor’, raises their cognitive performance in the form of Trivial Pursuit scores (Dijksterhuis & van Knippenberg, 1998). This replication attempt did not succeed (Roberts, Crooks, Kolody, Pavlovic, Rombola, & Standing, 2013). Subsequently, both a 9-experiment study (Shanks, Newell, Lee, Balakrishnan, Ekelund, Cenac, et al., 2013) and a preregistered replication study involving 40 labs have concurred, as they too failed to replicate the target study's results (O'Donnell et al., 2018).

The results of student replication projects are most effectively communicated by posting them as summaries on the website, which provides a refereed 'Archive of Brief Reports of Replication Attempts in Experimental Psychology'.

We see the major challenges in teaching research methods as students' limited ability to build a study that has a clear prediction with rival hypotheses, and to think clearly and logically through key issues such as randomization, control conditions, double-blind testing, counterbalancing, power, sample size, experimenter effects, and demand characteristics (for details, see McKelvie, 1994). Another challenge is to have the student exert tighter controls in non-experimental studies (e.g., match groups on subject variables and include a dependent variable on which no difference is expected). Even more fundamentally, we must ask each student to propose a study that is valid, interesting, and has a point to make, one that connects with previous work. We think that the instructor must confront these issues explicitly, in class and in personal interactions with the student. Another major issue is that of obtaining prior approval from institutional research ethics boards, which requires careful time scheduling and attention to detail in the required documentation. Since a study must be planned ahead of time, pilot testing is vitally needed, and this testing itself may require ethics approval, leading possibly to an infinite logical regress unless common sense is applied.

The problems of writing skills and APA format are also pervasive. We have found it useful to break reports into sections (staggered over time), to allow resubmission (after editing by the instructor), and to encourage students consciously to imitate the format of the APA model manuscript (APA, 2010, pp. 41-53), rather than to memorize formal rules. Students also receive a detailed handout explaining these rules.

In addition to the replication papers, the present approach, developed over several decades, has yielded many PsycINFO-listed refereed publications on a variety of topics, with undergraduates as coauthors: 71 and 50 papers by the two present authors, respectively (e.g., Benmergui, McKelvie, & Standing, 2017; Clohecy, Standing, & McKelvie, 2015; Knight & McKelvie, 1986; Martel, McKelvie, & Standing, 1987; McKelvie & Demers, 1979; Morin-Lessard & McKelvie, 2017; Shackell & Standing, 2007; Sigall & McKelvie, 2012; Standing, Aikins, Madigan, & Nohl, 2014; Standing, Verpaelst, & Ulmer, 2008). Students are often given primary authorship, even if we lead the writing. In one thesis project (Benmergui et al., 2017), the first purpose reflected the class replication exercise: to replicate a report (Beauchamp, 2002) that false recall in the Deese-Roediger-McDermott-Read-Solso (DRMRS) procedure would be smaller when the materials were presented as pictures rather than words. In this procedure, items on a to-be-remembered list are constructed around a central theme word that is not on the list (e.g., thread, pin, and sewing around the theme needle). False memory occurs when the theme word (“needle”) is recalled. This replication was successful. The experiment also extended previous research by including a condition with words and pictures together, a recognition memory test, and a measure of confidence, and by investigating the effect of list length.

Involving undergraduates in the publication process is not easy, but we argue that our apparent success is due in part to our cumulative course structure, to the explicit identification and discussion of challenges, and to the two systematic exercises outlined here.

Keywords: Teaching, Research Methods, Course structure, Critical discussion, Replication

Received: 21 Sep 2018; Accepted: 02 Nov 2018.

Edited by:

Traci Giuliano, Southwestern University, United States

Reviewed by:

Katherine L. Goldey, St. Edward's University, United States  

Copyright: © 2018 McKelvie and Standing. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Dr. Stuart McKelvie, Bishop's University, Department of Psychology, Sherbrooke, Canada,