OPINION article

Front. Appl. Math. Stat., 24 November 2017
Sec. Quantitative Psychology and Measurement
Volume 3 - 2017 | https://doi.org/10.3389/fams.2017.00025

Doctoral Education in Quantitative Research Methods: Some Thoughts about Preparing Future Scholars

Gavin T. L. Brown*

  • Quantitative Data Analysis and Research Unit, School of Learning, Development and Professional Practice, Faculty of Education and Social Work, University of Auckland, Auckland, New Zealand

Understanding social science domains is difficult because of the complex nature of the domains and the consequential challenges in capturing and analyzing data. To address such challenges, many analytic techniques have been and continue to be developed [1]. This means that social science fields like education, despite the perception that they are “soft” sciences, may be the hardest fields to research [2–4]. Within the social sciences, data can be either numeric or textual, requiring statistical and qualitative techniques of data collection and analysis [5–7]. Perhaps post-structuralist or post-modern approaches [8] to data cannot or should not be automated or statistically analyzed. However, while analysis of qualitative data is normally undertaken by humans, automation of the analytic process for these data is being developed (e.g., statistical discourse analysis; [9]). Such developments suggest that the future of social science research may bring greater synchronization between these approaches to data.

Consequently, this opinion piece focuses on the teaching of quantitative and statistical methods in the doctoral degree. In this piece, I first provide evidence for the complexity of social science research. I then consider the challenges and issues in our current arrangements for doctoral education in the field. I conclude with some tentative solutions for improving doctoral education in quantitative methods. Unsurprisingly, my opinion is that we must admit it is not yet evident how to balance the quantitative research methods curriculum so that it both prepares a wide variety of doctoral students for their careers as analysts, researchers, or scholars and does justice to the complex nature of reality and scientific investigation.

The Complexity of Social Science Research

To exemplify the complex nature of social science data and the types of techniques required, consider the impact evaluation of the Head Start program. Head Start began in 1965 with the aim of boosting the school readiness of low-income children in the United States [10]. The impact study examined nearly 5,000 children (ages 3 and 4) in two separate cohorts randomly assigned to Head Start or a control program and followed for 4 years. The cohorts were not equivalent in racial/ethnic characteristics, and the control groups did not have equivalent and common alternative programs. The children were in 23 different states, served by 84 randomly selected agencies and 383 randomly selected Head Start centers. The outcomes of children were examined at multiple times in multiple domains. Data were collected from multiple stakeholders using a variety of methods. Despite randomization, not all children enrolled or remained in their assigned condition.

A number of advanced statistical techniques were used. To make adequate generalizations from the sample to the population, sampling weights were calculated for each child [11]. Hierarchical or multilevel analysis [12] was used to account for the fact that children were nested within centers in different states and that such grouping influenced the nature of their experience. Unsurprisingly, about 20% of children were absent at various data collection points; hence, missing data analysis was needed [13]. Determining the effect of change in the various measures required techniques for repeated measures analysis [14, 15].
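
To make the nesting issue concrete, the following sketch fits a two-level random-intercept model to simulated data using Python's statsmodels package. It is a minimal illustration, not the Head Start analysis itself: the variable names and data are invented, and the study's sampling weights, missing-data handling, and repeated measures are not shown.

```python
# Illustrative sketch only: simulated children nested in centers, with
# treatment assigned at the center level (hypothetical variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_centers, n_per_center = 40, 25
center = np.repeat(np.arange(n_centers), n_per_center)        # cluster id per child
center_effect = rng.normal(0, 0.5, n_centers)[center]         # shared center-level variation
treatment = rng.integers(0, 2, n_centers)[center]             # cluster-randomized assignment
outcome = 0.3 * treatment + center_effect + rng.normal(0, 1, n_centers * n_per_center)

df = pd.DataFrame({"outcome": outcome, "treatment": treatment, "center": center})

# Random intercept for each center; ignoring the nesting would understate
# the uncertainty of the estimated treatment effect.
model = smf.mixedlm("outcome ~ treatment", data=df, groups=df["center"])
print(model.fit().summary())
```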

Thus, answering a relatively straightforward question (i.e., How much impact does Head Start have?) required complex data design, data collection, and sophisticated data analyses to account for multiple confounding and interacting factors. This creates substantial challenges for researchers and for those involved in preparing future researchers in doctoral programs. Doctoral education in the social sciences has yet to fully resolve what students need to be taught in quantitative research skills to be able to cope with life beyond their own degree. On the one hand, students need a deep, but probably relatively restricted, knowledge and skill set in order to make an “original and substantive contribution.” On the other hand, once graduated and employed in the field of social science research, junior researchers need a broad repertoire of analytic skills [1, 4, 16] to resolve the challenges and opportunities arising from the availability of large-scale data [17–19].

Most new graduates are unable to implement many techniques beyond those required for their own dissertation or thesis work [20]. While it is expected that new graduates will develop a more sophisticated suite of capabilities in their early work, this depends entirely on the work environment in which they are employed. It may be more common than not that new doctorates have little opportunity to practice what they have learned or to extend their competencies. Of course, this self-limiting reliance on already known methods applies equally to tenured and experienced researchers, who, having successfully established a reputation within a field, may restrict themselves to what they already know how to do [21], even if the data might be better addressed with alternative methods. This means that doctoral education in the social sciences that seeks to prepare graduates for competence in quantitative research methods faces problems in terms of both curriculum (what to teach) and pedagogy (how to teach).

Difficulties in Teaching Quantitative Methods

Social science studies that have to address large and/or complex data differ from the relatively simple world of small-sample laboratory experiments involving measurements on either side of an intervention. In that situation, relatively simple statistical measures are sufficient: the difference in means can be tested for statistical significance using a t-test or ANOVA F-test, and the practical size of the mean score difference can be evaluated using an effect size adjusted for the inter-correlation of the dependent variable scores. In true experiments, no sophisticated procedures are needed to estimate counterfactual data; the control group provides those data. Hence, education and social science students have an immense mountain to climb relative to the statistics needed in, say, a robust experimental study in psychology. While the experiment may be the default method, a further complicating factor that hinders the adoption of advanced, yet appropriate, analytic methods might lie in the preference of funders and sponsors of research for easy-to-understand results. Without necessarily corrupting the quality or independence of research, stakeholders who need to deal with the public (i.e., politicians) may not welcome complicated analyses and may discourage the use of appropriate techniques that are harder to explain [22].
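
To illustrate how little machinery the simple pre/post experiment requires, the following sketch runs a paired t-test and computes a dependent-samples effect size (the mean gain divided by the standard deviation of the gain scores, which already reflects the pre/post correlation). The data are simulated and purely illustrative.

```python
# Minimal sketch of the "simple" pre/post case: paired t-test plus a
# dependent-samples effect size, using simulated scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = rng.normal(50, 10, 30)            # simulated pre-test scores
post = pre + rng.normal(3, 5, 30)       # simulated modest gain

t, p = stats.ttest_rel(post, pre)       # paired (repeated-measures) t-test

# Effect size for dependent samples: the SD of the difference scores
# already incorporates the correlation between pre and post.
diff = post - pre
d_z = diff.mean() / diff.std(ddof=1)
print(f"t = {t:.2f}, p = {p:.4f}, d_z = {d_z:.2f}")
```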

Also complicating the field of quantitative research methods are issues to do with the nature of measurement and the variables used to operationalize complex phenomena. For example, operational definitions of latent phenomena (e.g., intelligence) are implemented through psychometric analyses (e.g., factor analysis, item response theory) of manifest variables intended to measure latent constructs [23, 24]. However, this approach is contested, especially by Michell [25], who argues that psychometric processes do not conform to scientific standards of measurement (e.g., there are no inherent units of measurement for latent constructs like intelligence, and the measurement tools do not have additive structure). While psychological constructs may not be inherently measurable, Borsboom and Mellenbergh [26] suggested that item response theory modeling may overcome some of these criticisms. Humphry [27] suggested that the problem can be overcome in part by conceiving of responses to psychometric measures over a standardized time period as creating a constant scale or continuum on which individuals vary on a domain theoretically aligned to the data collection items. The debate about the properties of psychometric measurements, and consequently how various measures ought to be statistically analyzed, is not yet resolved.
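
To make the operationalization step concrete, the sketch below recovers a single latent factor from simulated manifest item scores using exploratory factor analysis in scikit-learn. The items, loadings, and data are invented for illustration; nothing here settles the measurement debate just described.

```python
# Illustrative sketch: a latent "construct" is estimated from several
# manifest item scores via exploratory factor analysis (simulated data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500
latent = rng.normal(size=n)                          # unobservable construct score
loadings = np.array([0.8, 0.7, 0.6, 0.75])           # hypothetical item loadings
items = latent[:, None] * loadings + rng.normal(0, 0.5, size=(n, 4))  # manifest items

fa = FactorAnalysis(n_components=1).fit(items)
print("Estimated loadings:", fa.components_.round(2))
```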

However, it is worth remembering that not all variables in the social sciences and education are so problematic, because the relevant phenomenon is directly observable (e.g., attendance, truancy, tardiness, standardized test score performance). If test scores are used to describe abilities on specific tests (e.g., reading comprehension or mathematics) and no attempt is made to infer from the observed performance to a latent construct (e.g., intelligence), perhaps this debate becomes moot. Nonetheless, each researcher needs to develop a stance toward the properties of the tools used to create variables whose means and variations are used to make claims about social realities. Consequently, all researchers need a deep understanding of the philosophic underpinnings of scientific and statistical methods [28–30] and must make nuanced judgments concerning their methodological choices.

Further complicating this matter is the problem that the vast majority of doctoral students in education begin with relatively inadequate preparation for tackling the problems of large and complex data [31]. Although many research students have significant experience in professional practice, they often encounter substantial hurdles in coming to terms with the logic and practice of quantitative research design and analysis [32]. Indeed, even getting consensus among professors of research methods as to the essential methods for postgraduate study is difficult [32]. One solution to the diversity of methods is to expose students to many methods rapidly, with little time or support for developing deep understanding or competence; this creates the “inch-deep, mile-wide” phenomenon in which real competence is not achieved.

Furthermore, the linkages between research design and analysis and real-world problems are not necessarily taught well within research methods courses. For too long, quantitative research methods have relied on lectures, textbooks, proofs, and a cook-book acquisition of computational skills. Current graduate coursework in research methods either introduces students to a multiplicity of methods with little time to develop deep understanding [33] or focuses on a specific topic (e.g., regression analysis), teaching it thoroughly to the exclusion of extensions or alternatives. These approaches can be off-putting and generally result in a negative stance toward quantitative research [34]. Field's [35] new textbook in statistics shows some sensitivity to this problem by using a comic-book style, illustrated science fiction narrative to embed the teaching of statistical logic in an emotionally engaging story line about a young man whose partner has left him. Indeed, a case could be made that the real curricular goals of quantitative methods can only be attained when students master computation and interpretation, have a positive attitude toward using statistics, and develop a clear understanding of how and why statistics are needed. This can be called thrill, skill, and will [36].

Possible Solutions

A number of solutions to the “mile-wide, inch-deep” curriculum [37] seem feasible. Since research experts cannot be expert in all methods, it makes sense to share our expertise, though funding for this is difficult. A simple, but expensive, solution is the pooling of research methods instruction (e.g., the Bremen International Graduate School of Social Sciences, https://www.bigsss-bremen.de/) or the funding of research training by central government agencies (e.g., Leibniz Education Research Network, https://www.leibniz-gemeinschaft.de/en/research/leibniz-research-alliances/education-research/).

Much cheaper is the use of self-directed instruction, using online tools (e.g., Geoffrey Hubona's suite of R programming courses on Udemy, https://www.udemy.com/comprcourse/#instructor-1; Daniel Soper's free statistics calculators, http://www.danielsoper.com/statcalc/default.aspx; the Campbell Collaboration's online effect size calculators, https://www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-Home.php; and Bill Trochim's Research Methods Knowledge Base, http://www.socialresearchmethods.net/kb/index.php). These resources reduce the need to offer courses or teach students individually and exploit just-in-time, technology-based learning. Nonetheless, the quality and relevance of these resources need to be established before students rush in. Indeed, independent students may invest considerable effort in self-directed learning that turns out to be both inefficient and inappropriate. Hence, the promise of technology to allow independent learning may be illusory; such learning still requires interaction with peers or supervisors.

The automation of statistical analysis is currently feasible [38, 39]. Unsurprisingly, these implementations are either commercial or exist only as Python code, which is inaccessible to content-domain experts who may be considerably weaker in statistics or coding. The development of an automated system with a “wizard”-like user interface might be able to take advantage of machine learning algorithms and high-speed computing to pre-analyze datasets in accordance with specified characteristics of the variables and analytic goals. Such a tool would reduce the burden of becoming expert in all analytic procedures and let users focus more on learning and applying appropriate discipline-specific knowledge and theory.
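
As a purely hypothetical sketch of what the front end of such a “wizard” might look like, the code below implements a rule-based helper that inspects simple characteristics of the variables and suggests, rather than runs, a starting analysis. The function, its rules, and the toy data are invented and are far cruder than the machine-learning system envisaged above.

```python
# Hypothetical sketch of an analysis "wizard": crude rules mapping variable
# characteristics to a suggested starting analysis.
import pandas as pd

def suggest_analysis(df, outcome, predictor, group=None):
    """Suggest a starting analysis from simple characteristics of the variables."""
    numeric_outcome = pd.api.types.is_numeric_dtype(df[outcome])
    few_predictor_levels = df[predictor].nunique() <= 10
    if group is not None:
        return "Multilevel model (observations nested in groups)"
    if numeric_outcome and few_predictor_levels:
        return "t-test / ANOVA, plus an effect size"
    if numeric_outcome:
        return "Linear regression"
    return "Logistic regression or another categorical-outcome method"

# Example use with a toy data frame (hypothetical variables).
toy = pd.DataFrame({"score": [52.0, 61.0, 48.0, 70.0],
                    "condition": ["A", "B", "A", "B"]})
print(suggest_analysis(toy, outcome="score", predictor="condition"))
```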

An unsurprising critique of these suggestions is that they focus on the skill of quantitative research methods. What are the solutions for inspiring greater motivation (i.e., will) or excitement (i.e., thrill) among social science and education researchers to develop the skills our data require? The quality of the research environment and of supervisory arrangements matters in ensuring that new researchers are “switched on” [40]. For example, working with real data that are relevant to the content discipline of students can be expected to raise motivation and interest. Senior academics who mentor junior researchers on real problems, with the goal of co-authorship of a submitted manuscript, seem to inspire and motivate, as well as equip, new researchers [41]. Supportive research environments include ensuring that research units have at least five research-active staff or postdoctoral researchers and at least 10 research students; adequate library and IT facilities; preparation programs that cover the diversity of research skills and knowledge; and access to seminars, conferences, presentations, and teaching/demonstrating experiences [40]. This requires moving away from the solitary academic to a team model more commonly seen in the pure sciences. If we wish students to enter a community of research scholars, then there must be a community into which they can be inducted, rather than relying solely on the idiosyncrasy of an individual scholar.

Unfortunately, this opinion piece does not come to a definitive conclusion as to what should be done to revisit both the curriculum and pedagogy of quantitative methods so that our graduates will be future-capable [42]. Instead, multiple suggestions have been made with little evidence as to their efficacy. These speculations do not disguise the fact that there is little clarity or consensus about what methods and tools students should be taught, what kinds of environments are more effective and long-lasting, or which pedagogical activities and processes are most effective. This is a global problem. Perhaps an international Delphi study [43] of research methods instruction experts would be a first step toward agreeing on how skill, will, and thrill in the education of doctoral students in the quantitative social sciences can be achieved.

Author Contributions

The author confirms being the sole contributor of this work and approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Hodis F, Hancock GR. Introduction to the special issue: advances in quantitative methods to further research in education and educational psychology. Educ Psychol. (2016) 51:301–4. doi: 10.1080/00461520.2016.1208750

2. Berliner DC. Educational research: the hardest science of all. Educ Res. (2002) 31:18–20. doi: 10.3102/0013189X031008018

3. Phillips DC. Research in the hard sciences, and in very hard “Softer” domains. Educ Res. (2014) 43:9–11. doi: 10.3102/0013189X13520293

4. Wieman CE. The similarities between research in education and research in the hard sciences. Educ Res. (2014) 43:12–4. doi: 10.3102/0013189X13520294

5. Strauss A, Corbin J. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. London: Sage Publications (1998).

6. Lincoln YS, Denzin NK. Epilogue: the eight and ninth moments-Qualitative research in/and the fractured future. In: Denzin NK, Lincoln YS, editors. The SAGE Handbook of Qualitative Research, 3rd Edn. Thousand Oaks, CA: SAGE (2005). p. 1115–26.

7. Green JL, Camilli G, Elmore PB. (eds.). Handbook of Complementary Methods in Educational Research. Mahwah, NJ: Lawrence Erlbaum Associates Inc. (2006).

8. Kincheloe JL, McLaren PL. Rethinking critical theory and qualitative research. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. London: SAGE Publications (1994). p. 138–57.

9. Chiu MM, Lehmann-Willenbrock N. Statistical discourse analysis: modeling sequences of individual actions during group interactions across time. Group Dyn Theory Res Pract. (2016) 20:242–58. doi: 10.1037/gdn0000048

10. U. S. Department of Health and Human Services. Head Start Impact Study (Technical Report). Washington, DC: The Office of Planning Research and Evaluation Administration for Children and Families (2010).

11. Hibberts M, Johnson RB, Hudson K. Common survey sampling techniques. In: Gideon L, editor. Handbook of Survey Methodology for the Social Sciences. New York, NY: Springer (2012). p. 53–74.

12. Raudenbush SW, Bryk AS. Hierarchical Linear Models: Applications and Data Analysis Methods, 2nd Edn. Thousand Oaks, CA: Sage (2002).

13. Little RJA, Rubin DB. Statistical Analysis with Missing Data, 2nd Edn. Hoboken, NJ: John Wiley & Sons (2002).

14. Fitzmaurice GM, Laird NM, Ware JW. Applied Longitudinal Analysis. Hoboken, NJ: Wiley (2004).

15. Bollen KA, Curran PJ. Latent Curve Models: A Structural Equation Perspective. Hoboken, NJ: Wiley-Interscience (2006).

16. Chen EE, Wojcik SP. A practical guide to big data research in psychology. Psychol Methods (2016) 21:458–74. doi: 10.1037/met0000111

17. McNeely CL, Hahm JO. The Big (Data) bang: policy, prospects, and challenges. Rev Policy Res. (2014) 31:304–10. doi: 10.1111/ropr.12082

18. Song IY, Zhu Y. Big data and data science: what should we teach? Expert Syst. (2016) 33:364–73. doi: 10.1111/exsy.12130

19. Daniel BK. (ed.). Big Data and Learning Analytics in Higher Education: Current Theory and Practice. Cham: Springer (2017).

20. Brown GTL. New Methods for New Methods: 2015 Interviews in Germany. Auckland, NZ: Figshare (2015). doi: 10.17608/k6.auckland.3843438.v2

21. Meissel KL. Quantitative Analysis Selection in Education: Potential Impacts on Researchers' Conclusions. Unpublished doctoral dissertation, The University of Auckland, Auckland (2014).

22. Douglas HE. Scientific integrity in a politicized world. In: Schroeder-Heister P, Heinzmann G, Hodges W, Bour PE, editors. Logic, Methodology and Philosophy of Science-Proceedings of the 14th International Congress. London: College Publications (2014). p. 253–68.

23. Bradley WJ, Schaefer KC. The Uses and Misuses of Data and Models: The Mathematization of the Human Sciences. Thousand Oaks, CA: SAGE (1998).

24. Borsboom D. Measuring the Mind: Conceptual Issues in Contemporary Psychometrics. Cambridge: Cambridge University Press (2005).

25. Michell J. Measurement in Psychology: A Critical History of a Methodological Concept. New York, NY: Cambridge University Press (1999).

26. Borsboom D, Mellenbergh GJ. Why psychometrics is not pathological: a comment on Michell. Theory Psychol. (2004) 14:105–20. doi: 10.1177/0959354304040200

27. Humphry SM. Psychological measurement: theory, paradoxes, and prototypes. Theory Psychol. (2017) 27:407–18. doi: 10.1177/0959354317699099

28. Popper K. Unended Quest: An Intellectual Autobiography. London: Routledge (1992).

29. Chalmers AF. What Is This Thing Called Science? Brisbane, QLD: University of Queensland Press (1999).

30. Manicas PT. A Realist Philosophy of Social Science: Explanation and Understanding. Cambridge: Cambridge University Press (2006).

31. Eisenhart M, DeHaan RL. Doctoral preparation of scientifically based education researchers. Educ Res. (2005) 34:3–13. doi: 10.3102/0013189X034004003

32. Brown GTL. What supervisors expect of education masters students before they engage in supervised research: a Delphi study. Int J Quan Res Educ. (2014) 2:69–88. doi: 10.1504/IJQRE.2014.060983

33. Brown GTL. Investigations into the Research Preparation of Masters Students for Independent Study (RPIS). Final Report to the Research Methods Preparation in the Faculty of Education Project. University of Auckland, Faculty of Education, Auckland (2007). doi: 10.17608/k6.auckland.3843483.v2

34. Steele JM, Rawls GJ. Quantitative research attitudes and research training perceptions among Master's-level students. Couns Educ Supervision (2015) 54:134–46. doi: 10.1002/ceas.12010

35. Field A. An Adventure in Statistics: The Reality Enigma. London: Sage (2016).

36. Hattie JAC, Donoghue GM. Learning strategies: a synthesis and conceptual model. NPJ Sci Learn. (2016). 1:16013. doi: 10.1038/npjscilearn.2016.13

37. Brown GTL, Moschner B. Preparing scientific researchers: problems facing research methods instruction. In: Paper Presented at the Biennial Meeting of the European Association for Research in Learning and Instruction. Limassol (2015). doi: 10.17608/k6.auckland.3843456.v1

38. KDnuggets. Automated Data Science Data Mining (2017). Available online at: http://www.kdnuggets.com/software/automated-data-science.html (Accessed February 10, 2017).

39. Slater S, Joksimović S, Kovanovic V, Baker RS, Gasevic D. Tools for educational data mining. J Educ Behav Stat. (2017) 42:85–106. doi: 10.3102/1076998616666808

40. Metcalfe J, Thompson Q, Green H. Improving Standards in Postgraduate Research Degree Programmes. London: UK Council for Graduate Education, Centre of Excellence (2002).

41. Pellegrino JW, Goldman SR. Be careful what you wish for–you may get it: educational research in the spotlight. Educ Res. (2002). 31:15–7. doi: 10.3102/0013189X031008015

42. González-Ocampo G, Kiley M, Lopes A, Malcolm J, Menezes I, Morais R, et al. The curriculum question in doctoral education. Frontline Learn Res. (2015). 3:19–34. doi: 10.14786/flr.v3i3.191

43. Linstone HA, Turoff M. (eds.). The Delphi Method: Techniques and Applications. Reading: Addison-Wesley (1975).

Keywords: quantitative research methods, curriculum design, pedagogy, doctoral education, advanced statistical methods

Citation: Brown GTL (2017) Doctoral Education in Quantitative Research Methods: Some Thoughts about Preparing Future Scholars. Front. Appl. Math. Stat. 3:25. doi: 10.3389/fams.2017.00025

Received: 07 September 2017; Accepted: 13 November 2017;
Published: 24 November 2017.

Edited by:

Elisa Pedroli, Istituto Auxologico Italiano (IRCCS), Italy

Reviewed by:

Femke L. Truijens, Ghent University, Belgium

Copyright © 2017 Brown. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Gavin T. L. Brown, gt.brown@auckland.ac.nz
