
METHODS article

Front. Educ., 21 September 2023
Sec. Assessment, Testing and Applied Measurement
Volume 8 - 2023 | https://doi.org/10.3389/feduc.2023.1169938

Using version control to document genuine effort in written assignments: a protocol with examples for universities

Levente L. Orbán*

  • Precision Mental Health Lab, Discipline of Psychology, School of Law and Social Sciences, The University of the South Pacific, Suva, Fiji

This conceptual report describes a novel assessment technique for evaluating written assignments in universities, such as literature reviews, essays, or research proposals, over the course of the academic semester in multiple milestones. The method can be adapted to undergraduate and graduate courses in disciplines with a writing requirement. Key features of this method include encouraging student self-regulation and spacing of learning, rapid scoring using quantitative elements, improving the authenticity and transparency of the written work, and guiding students towards better writing and thinking skills. The method involves the implementation of a system called version control, a class of software for tracking and managing changes to textual data, often used by programmers to track code. This report describes a use case in two psychology courses, detailing the logistics and marking dynamics surrounding the assessment. The protocol has been seeded in a public repository on GitHub where educators can contribute and develop the technique further.

1. Introduction

A written assignment is a type of context-free assessment tool that promotes processing information rather than reproducing it. It encourages students to draw on facts to organise and eliminate information as they formulate an argument (Schuwirth and van der Vleuten, 2004). A written assignment is a career-realistic assessment tool because it does not test knowledge of facts, which is rarely the end goal of a professional role, but rather the possession and comprehension of facts in the service of achieving another end goal.

Despite the value of written assignments in education and in preparing students for their professions, they are cumbersome to evaluate: they are time-consuming for faculty to grade, students tend to write them at the last minute (Green-Lehrman, 2015), and students almost never read the feedback fully (Bouwer and Dirkx, 2023). This trend has only worsened since online education became mainstream (Smith, 2019; Comas-Forgas et al., 2021).

Cheating on written assignments has also been amplified by tailored online services (Noorbehbahani et al., 2022; Yazici et al., 2022), though cheating has been a problem for as long as written assignments have existed (Woods, 1933; Murdock, 2016). In 2023, ChatGPT (OpenAI, San Francisco, USA) became the most well-known aid for written assignments in universities (Dwivedi et al., 2023; Sullivan et al., 2023). While ChatGPT is not primarily a cheating tool, e-cheating (i.e., using information technology in the process of cheating) has become a distinct market in which unaware, unassuming or naïve students are targeted through advertising (Dawson, 2021). Services such as MyPaperHelp (Devellux Inc., Delaware, USA), EduBirdie (I3 Technology, Burgas, Bulgaria) and UKWritings (London, United Kingdom) are used around the world. Violations involving poor citation or copy-pasting can be caught using expensive services such as Turnitin (Oakland, CA), but violations involving writers-for-hire or, in some cases, the students' parents are not only difficult to detect but challenging to prove in university academic violation proceedings (Campbell et al., 2000; Rogerson, 2017; Bretag et al., 2018; Smith, 2019). In my opinion, catching AI-assisted cheating (e.g., ChatGPT) with counter-AI (e.g., Turnitin) is a cat-and-mouse game with no end in sight. In my experience at several universities, the rising incidence of academic integrity violations has notably increased the workload of the academic and administrative staff tasked with managing such conduct (see also: Berry, 2021; Friesen, 2023).

There is also a conceptual problem with written assessments: the final output is assessed, but not its manner of execution (Graham and Sandmel, 2014). Whether the paper was written in one night or researched and reflected upon for several weeks is not checked directly, and the instructor's grading technique may not always be granular enough to distinguish a well-written paper from a well-researched one. Two issues follow: (1) learning is not spaced out, despite evidence that enduring retention is achieved through substantial gaps in practice (gaps of months to achieve knowledge lasting years) (Cepeda et al., 2008); and (2) learning is not self-regulated, despite the importance of self-regulation for achievement among university students (Cheng et al., 2013; Yot-Domínguez and Carlos, 2017).

Scriven (1967) articulated the distinction between formative and summative assessments: formative assessments provide feedback that can be used to revise and improve subsequent work, whereas summative assessments evaluate the overall effectiveness of the material. Written assessments marked at the time of submission typically lack such a formative component. Scriven (1967) further emphasized the importance of an impartial, logic-based and multi-method process that takes into consideration the entire context of learning rather than the narrow scope of objectives and goals. These features have been particularly difficult to implement in written assessments. The proposed milestone-based approach enables tracking any number of behavioural signatures that are objective, at arm's length from the educator, and integrated with the temporal context of developing the paper.

This assessment is explicitly teaching-mode agnostic, meaning that it can be implemented equally well in face-to-face, online, and hybrid courses. The COVID-19 pandemic forced educators to convert their courses to online mode on short notice. The subsequent worldwide uncertainty (recurring waves of the virus, deteriorating climatic events, political instability, and unanswered questions about hybrid and remote work arrangements going forward) creates demand for highly versatile assessments that can be adapted to many teaching models in the future.

This report does not describe a multi-draft evaluation method, though the method is compatible with iterative, feedback-based assessment forms (Eckstein et al., 2011). The method is comparable to the process writing approach (Graham and Sandmel, 2014), in which students engage in cycles of planning, translating and reviewing their work over an extended period of time. Graham and Sandmel (2014) have echoed calls for improving the teaching of writing, and the proposed method offers a quantitative and rapidly marked solution. The aim is not to promote the use of technology or any particular software, but software concepts are used to sequence the workflow and manage the workload. Finally, written assignment is used here to refer to a variety of assessments used in universities, including essays, term papers, literature reviews, research proposals, and other forms of free writing exercises.

Here, I propose a novel assessment sequence that evaluates a written assignment longitudinally by breaking it into rapidly scored milestones (see Figure 1). The goal of this assessment sequence is to address the conceptual and practical problems of written assignment marking, change the cost-benefit analysis of hiring writers and minimize the rewards of cheating, and introduce the judicious and minimalist use of technology.

FIGURE 1

Figure 1. The top row is an illustration of drafts as they develop over time from day 0 (the day the assignment is prescribed) to the deadline (when the work is submitted). Below the drawing are timelines indicating (A) the span of the evaluation window for the proposed assessment, (B) the portion of the work that can be inspected by the educator, independently of the milestones, and, for comparison, (C) the span of the evaluation window within the current convention of written assignment assessment.

1.1. Version control

A concept in software engineering is version control (also known as source or revision control), which refers to a class of systems that manage changes to collections of information such as documents or programming code. The system enables tracking changes to information over a period of time. Information is “checked out” and “checked in” by an author from a central repository, and the version control system keeps an authoritative master copy (see Figure 2). The system resembles a physical library, except that, unlike library books, checked-out information can be changed. While the first version control system was developed in 1962 by IBM (Kemper and Oxley, 2012), version control has not been widely adopted by educators in the sciences or humanities for the purpose of document development.

FIGURE 2

Figure 2. The version control concept is much like a physical library where books are checked out for reading, and then checked back into the library. Similarly, the version control server keeps the authoritative copy of a document for users to check out. Unlike with books from a library, information checked out from a version control repository can be changed or even deleted.

This system has been extended to allow multiple individuals to collaborate on a single document: the system keeps track of all working copies and manages the merging of each contributor's copy into the master using a set of conflict resolution rules. Version control in education, and particularly in project-based learning, has been well documented (Milentijevic et al., 2008).
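For readers unfamiliar with the check-out/edit/check-in cycle, the following minimal sketch illustrates one round of it in Python using the GitPython library. The repository URL and file name are hypothetical placeholders; the sketch is illustrative and not part of the assessment protocol itself.

    # A minimal sketch of the check-out / edit / check-in cycle, using the
    # GitPython library (pip install GitPython). The repository URL and the
    # file name are hypothetical placeholders.
    import git

    # "Check out": clone the authoritative copy from the central repository.
    repo = git.Repo.clone_from(
        "https://github.com/example-org/term-paper.git", "term-paper")

    # Edit the working copy: append a paragraph to the draft.
    with open("term-paper/master.md", "a") as f:
        f.write("\nOne more paragraph added to the draft today.\n")

    # "Check in": record the change and push it back to the authoritative copy.
    repo.index.add(["master.md"])
    repo.index.commit("Extend the draft by one paragraph")
    repo.remote().push()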

Over the past decade, version control has been implemented in common word processors such as Word (Microsoft, Redmond, USA), Docs (Alphabet, Mountain View, USA), and Pages (Apple, Cupertino, USA), among others. These applications even allow simultaneous collaboration on a document and enable monitoring its progression with a high degree of transparency to all authors and with character-level precision. Some applications and platforms provide a snapshot of edits, or a “punch card”, that visualises the activity pattern of the contributors. Therein lies the opportunity to judge the value of a written assignment not only by the final product, but also by its style of execution. The protocol described here uses GitHub, a proprietary platform built on the Git engine that provides a user-friendly interface and other helpful features.

1.2. The marking platform

GitHub (Microsoft, Redmond, WA) is a combination of a community hub website and a repository system for anyone wishing to publish or collaborate on any type of code or textual document. GitHub is built on a version control system called Git, written by Linus Torvalds in 2005 and released as free and open-source software. GitHub's feature set, in development since 2008, is mature, relatively bug-free, and free for educators to use in their classrooms.

GitHub is compliant with data protection laws in many countries around the world, including the European Union's General Data Protection Regulation (GDPR), but some universities may nevertheless restrict faculty from using third-party services. There are open-source alternatives, such as Gitea (gitea.io), that any university can self-host and maintain in-house. A self-hosted variant of Git offers even more exclusivity and cohesion from a student's perspective.

1.3. The written assignment evaluation system

While this is not an empirical paper and there is no intention of analysing success rates or student responses to the assessment method, I will share some qualitative observations from implementing the assignments in my regular teaching duties. Students' performance is not the focus of this paper, and consequently no student data are shared.

This assessment was implemented at a publicly funded university in the South Pacific island nation of Fiji, with students geographically dispersed across Fiji and other small island nations of Oceania. Students were enrolled in their senior year of university studies, working towards their Bachelor's degrees in Psychology. No funding was requested from the university or any other agency to implement the software.

To utilise version control for the purpose of written assignment assessment, I propose three potential parameters: (1) time to first revision from the moment the assignment is released, (2) overall time-span of revisions from day 0 to the deadline, and (3) frequency of contributions over that time-span (Figure 3). Time to first revision marks the date and time the student began working on their written assignment; for the maximum score, this value should be as close as possible to the time the written assignment instructions were released. Time-span of revisions refers to the time between the first edit and the final product, which is usually the project due date. Finally, frequency of contributions is the number of days on which the student logs on to edit the document.
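To make the three parameters concrete, the following minimal sketch extracts them from a student's commit history using GitPython; the repository path and the release and deadline dates are hypothetical placeholders.

    # A minimal sketch of extracting the three proposed parameters from a
    # local clone of a student's repository, using GitPython. The path and
    # the dates are hypothetical.
    from datetime import datetime, timezone
    import git

    repo = git.Repo("student_repo")
    release = datetime(2023, 1, 16, tzinfo=timezone.utc)   # day 0: instructions released
    deadline = datetime(2023, 4, 14, tzinfo=timezone.utc)  # submission deadline

    commits = sorted(repo.iter_commits(), key=lambda c: c.committed_datetime)

    # (1) Time to first revision: gap between release and the first edit.
    time_to_first = commits[0].committed_datetime - release

    # (2) Overall time-span of revisions: first edit to last edit.
    time_span = commits[-1].committed_datetime - commits[0].committed_datetime

    # (3) Frequency of contributions: distinct days with at least one edit.
    active_days = {c.committed_datetime.date() for c in commits}

    print(f"Time to first revision: {time_to_first.days} days")
    print(f"Time-span of revisions: {time_span.days} days")
    print(f"Days with contributions: {len(active_days)}")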

FIGURE 3

Figure 3. The punch card concept implemented on GitHub.com. Shaded green dots indicate activity levels for a given day, and the shade of green adds further granularity to activity levels. Each square refers to a day of the month indicated on the x-axis. The top punch card indicates a last-minute work pattern, with most of the work done near the final deadline. The middle graph indicates a strong work ethic, with the student logging in frequently and applying significant edits to the document throughout the semester. The bottom graph indicates a work pattern that is distributed throughout the semester but somewhat weak in terms of the volume of work.

A student writing the paper the night before would produce a pattern in which the time-span of writing is one day, with a significant number of edits on the day prior to the deadline. A student who contracted out the work, or used ChatGPT to generate the entire text, would produce a time-span of under one day with just a single edit. On the other hand, a student who began working on their paper shortly after the release of the instructions and contributed to their draft each day would produce a pattern with a long time-span of edits and a high frequency of contributions. Given that revision control tracks each edit (Figure 4), this execution component of the written assignment score can be calculated automatically.
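The mapping from revision history to these work patterns can be expressed as a simple rule. The thresholds in the sketch below are illustrative assumptions; the protocol itself prescribes no specific cut-offs.

    # Hypothetical thresholds for labelling the execution patterns described
    # above; the cut-off values are illustrative assumptions only.
    def execution_pattern(span_days: int, n_edits: int, active_days: int) -> str:
        if n_edits <= 1:
            return "single bulk edit: possibly contracted-out or generated text"
        if span_days <= 1:
            return "last-minute: all edits within one day"
        if active_days >= 0.5 * span_days:
            return "spaced: frequent contributions over a long time-span"
        return "intermittent: long time-span but sparse contributions"

    print(execution_pattern(span_days=60, n_edits=45, active_days=38))  # "spaced: ..."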

FIGURE 4

Figure 4. A text file created by a student, showing its history of changes. The green highlight indicates the line and characters where additions have taken place. The green shaded square indicates how much the document has changed in relation to the total character count. This particular image depicts the creation of a blank file named master.md.

1.4. Extension to group work

Versioning can be extended to track contributions to a paper by each group member. One of the challenges in grading group work is scoring each group member's contributions fairly. Educators can rely on various proxies, such as students' ratings of each other, but these peer rating systems have drawbacks. Revision control offers a quantified, unambiguous method to assist in judging each group member's contributions. The concept of commits can be used to track how much a team member contributed to the project. Commits comprise both additions and deletions, so members who are editing a document or making changes to another member's additions also count towards activity levels. The number of commits on a given document is automatically tracked by most revision control systems and is easy to view.
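As a sketch of how this tallying could be automated, the following GitPython snippet sums additions and deletions per group member from a local clone. The repository path is a hypothetical placeholder, and note that Git reports changes at the line level rather than the character level.

    # Per-member activity for group work: insertions and deletions both
    # count towards activity, as described above.
    from collections import Counter
    import git

    repo = git.Repo("group_repo")  # hypothetical local clone of the group's repository
    activity = Counter()
    for commit in repo.iter_commits():
        stats = commit.stats.total  # dict with 'insertions', 'deletions', 'lines', 'files'
        activity[commit.author.name] += stats["insertions"] + stats["deletions"]

    for member, lines_changed in activity.most_common():
        print(f"{member}: {lines_changed} lines changed")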

2. Sample assignments

Two sample assignments are included below. The first contains instructions for a generic literature review paper appropriate for undergraduate students in the second or third year of studies. The second contains an example with an applied component, prescribed to graduate students; the applied component can include data collection, interview data or big data analysis. In the example below, the instructions include conducting an interview for an Advanced Cross-Cultural Psychology course.

The first assignment has been broken into four milestones, whereas the second has been broken into five. The number of milestones can be adjusted; the principle used in developing the four- and five-milestone systems was to make each milestone easily achievable for students and easily marked by the educator.

2.1. Undergraduate written assignment

2.1.1. Instructions

The goal of this literature review paper is to show how well you comprehend, summarise and critique scientific information about the theories covered in this course. The paper will be a roughly 2,000–2,500-word essay in which you review a minimum of two theories based on textbook readings and peer-reviewed publications, and evaluate the quality of the knowledge.

2.1.2. Topic selection

You will choose the topic of your written assignment based on topics covered in your required readings. You may refer to a variety of sources, including textbooks, other review papers and Wikipedia, but you will need to find 5 peer-reviewed empirical research articles connected to your topic.

An empirical research article (also known as primary research) describes an experiment that the authors conducted themselves. This is different from a review article (secondary research), in which others' research is summarised. You can include review articles, but they do not count towards the minimum of 5 peer-reviewed primary research articles. The same goes for books. Wikipedia and other Internet sources are not to be used in your paper, but I encourage you to use them to get acquainted with your topic and to help understand what your paper will be about.

2.1.3. Assessment structure

You will be assessed on (1) the development of your review paper and (2) the final product (the quality of the actual paper). Assessment is broken down into 4 milestones and you must meet each milestone before you can progress to the next one within the deadlines given in the course outline. Failing to complete a milestone means failing the entire written assignment requirement.

2.1.4. Set up

1. Create an account on GitHub.com.

2. Submit your username in the assignment module.

3. Wait for an automated email from GitHub: You will receive an invitation to join a private repository.

4. Look around in the repository where you have been added.

2.1.5. Punchcard

GitHub uses a punchcard to show you activity levels while working on documents (Figure 3). The more bits of text you change in your document, the more activity will be shown on your punchcard (Figure 4). Activity levels on your punchcard will be used to determine your grade on this written assignment. Consequently, it is worthwhile to spread out your document editing, doing just a little bit each day rather than doing it all in one shot.

2.1.6. Milestone 1

Weight: 2%, must complete to progress to the next milestone

1. Create a folder in the repository with the following structure: LastName_StudentID.

2. Create a new file within the repository and name it “master.md”.

a. Your paper will be written in “markdown” format. This is different from Microsoft Word or OpenOffice.

b. Here is a cheatsheet for creating headings and lists in markdown: markdownguide.org/cheat-sheet.

3. Create a References section in your master.md document and enter the APA-formatted references for 3 papers you will consider in your paper (don't worry, it's okay if you change your mind and swap these references for others).

2.1.7. Milestone 2

• Increase your reference count to 5. Swap papers as needed.

• Write a 1-paragraph summary for each of the papers in the references section.

a. The paragraph can be as short as 4–5 sentences in total. Try writing one sentence to describe each section of the paper (Intro, Methods, Results, Discussion).

b. These paragraphs can be simply written under the reference item in the References list (for now).

2.1.8. Milestone 3

• Add 2–3 paragraphs of your own critique of the papers that you have reviewed.

• The 5 references & summaries can remain the same.

2.1.9. Milestone 4

• Compose your paper:

a. Add a paragraph of introductory statements (broad interest, narrowing down the scope of your paper).

b. Move your paper summary paragraphs over to the main body. Expand and write sentences to connect them: form a narrative and guide the reader.

c. Move your critique to the end of the paper. Ensure good flow.

d. Format your paper according to APA 7.0 stylistic rules.

e. See the Essay Marking Criteria.

2.2. Advanced written assignment: cross-cultural psychology

2.2.1. Instructions

Cultural competence for a counsellor or clinical mental health service provider consists of three components:

• Cultural awareness and beliefs: The provider is sensitive to her or his personal values and biases and how these may influence perceptions of the client, the client's problem and the counselling relationship.

• Cultural knowledge: The counsellor has knowledge of the client's culture, worldview, and expectations for the counselling relationship.

• Cultural skills: The counsellor has the ability to intervene in a manner that is culturally sensitive and relevant.

Cultural knowledge is typically gained during the clinical assessment process, although learning about a client's cultural values and beliefs continues throughout the treatment process. There are a variety of ways to become knowledgeable about a client's culture, worldview and expectations of counselling. An important method is to proactively create opportunities for clients to share their cultural perspective in a way that is consistent with clinical assessment. Asking direct and specific questions about culturally specific views is one way to learn.

In this assignment, you will interview one of your classmates who is of a different cultural background from your own. Set aside about 1 hour for the interview. If you can, meet face to face. You can use your phone's or laptop's voice recorder to record the conversation. Transcribing will take about 4 hours for each hour of voice recording.

Your goal is to ask questions that tend to elicit responses reflecting cultural beliefs and attitudes. To guide your interview process, here are some guiding questions that you can use to start the conversation. The questions are intended to provide a range of possible ways to assess various cultural beliefs, values and practices that can potentially impact treatment engagement and outcomes. You can ask any number of follow-up questions as you see fit.

2.2.2. Assessment structure

You will be assessed on (1) the development of your paper and (2) the final product (the quality of the actual paper). Assessment is broken down into 5 milestones and you must meet each milestone before you can progress to the next one within the deadlines given in the course outline. Failing to complete a milestone means failing the entire written assignment requirement.

2.2.3. Set up

1. Create an account on GitHub.com.

2. Submit your username in the assignment module.

3. Wait for an automated email from GitHub: You will receive an invitation to join a private repository.

4. Look around in the repository where you have been added.

2.2.4. Punchcard

GitHub uses a punchcard to show you activity levels while working on documents (Figure 3). The more bits of text you change in your document, the more activity will be shown on your punchcard (Figure 4). Activity levels on your punchcard will be used to determine your grade on this written assignment. Consequently, it is worthwhile to spread out your document editing, doing just a little bit each day rather than doing it all in one shot.

2.2.5. Milestone 1

Weight: 2%, must complete to progress to the next milestone

1. Create a folder in the repository with the following structure: LastName_StudentID.

2. Create a new file within the repository and name it “master.md”.

a. Your paper will be written in “markdown” format. This is different from Microsoft Word or OpenOffice.

b. Here is a cheatsheet for creating headings and lists in markdown: markdownguide.org/cheat-sheet.

3. Create a References section in your master.md document and enter the APA-formatted references for 3 papers you will consider in your paper (don't worry, it's okay if you change your mind and swap these references for others).

4. Identify who you will interview.

2.2.6. Milestone 2

Weight: 12%, must complete to progress to the next milestone

a. Conduct your interview.

b. Transcribe it into the document verbatim.

2.2.7. Milestone 3

Weight: 12%, must complete to progress to the next milestone

1. Add 2–3 paragraphs per topic area for:

a. Dimensions of culture

b. Identity

2.2.8. Milestone 4

Weight: 12%, must complete to progress to the next milestone

1. Add 2–3 paragraphs per topic area for:

a. Acculturation

b. Health

2.2.9. Milestone 5

Weight: 12%

1. Add 2–3 paragraphs per topic area for:

a. Language

b. Emotions

2. Compose your final paper: add a general introduction and a concluding paragraph.

2.3. Impressions on a small scale implementation

The assessment has been implemented in one undergraduate Cognitive Psychology course and one postgraduate Advanced Cross-Cultural Psychology course in Oceania. The Moodle shell was configured with a separate assignment module for each milestone and an additional assignment module describing the entire project. In week 1, students were asked to acknowledge their understanding of the entire project and to provision their accounts on GitHub. Once students created a profile, a teaching assistant collected their usernames and added them to a private organisational group that is not visible to the general public.

The majority of students followed the instructions and navigated the technology successfully. A handful of students had issues submitting their username or responding to the notification email containing the invitation instructions. The use of a class-wide instant messenger enabled these students to receive near real-time help from their classmates and substantially reduced the workload on the educator.

The user interface of the text editing application on GitHub is simple relative to applications such as Word or Pages. Many features found in typical word processor software (e.g., styles, layouts, editor features, design, etc.) are absent. The paper is written in a plain text editor in Markdown format (a lightweight markup language), which consists of a few simple rules to format the text. For example, typing a hash (#) at the beginning of a line creates a heading, and surrounding a phrase with double asterisks turns the text bold. A link to a quick reference guide was sufficient to get students up to speed with this writing style.

Each milestone was relatively easy to achieve, which translated into high completion rates. However, completion of each successive milestone depends on the preceding one, which creates the possibility of cascading issues for the student as deadlines pass. Consequently, it is important to follow up with students who miss a deadline, even if they cannot receive a mark on a missed milestone. This additional follow-up work increases the workload for the educator but may have positive effects on class engagement and nurtures a positive relationship with struggling students.

There were a few submissions of entire papers pasted into the editor a day before the final deadline. These represented a dilemma because the authenticity of the work could not be verified. Excuses included working in a different word processor and pasting the text into GitHub on the last day, and in some cases the work appeared to be above the expected level. The marking scheme capped the maximum grade of these submissions at 60%, which is technically a passing grade but has a suppressing effect on cheating. The marking structure can be tuned for even stronger suppression, leading potential cheaters to fail the assignment.

In both courses, the only quantitative parameter used to determine the execution mark was the time-span of edits. Specifically, students were given full marks for execution if they edited their document on at least 5 days within the 10-day time-span leading up to the deadline. Many students produced a compressed execution style for earlier milestones but adjusted their work pattern for later milestones. Some students missed two milestones before they learned to follow the execution-style requirement of spacing out their work. In sum, students expressed some anxiety over the assessment before they started working on it, but those anxieties subsided once they completed the first milestone. More importantly, their work could be trusted to be authentic, and the evaluation discriminated quantitatively on work ethic.
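As a sketch, this criterion can be written as a short function. Only the full-marks rule (edits on at least 5 distinct days within the 10-day window) was specified above; the linear partial credit below is an illustrative assumption, and the dates are hypothetical.

    # The execution-mark rule used in both courses: full marks when the
    # student edited on at least 5 distinct days within the 10-day window
    # before the deadline. The linear partial-credit ramp is an assumption
    # for illustration only.
    from datetime import date, timedelta

    def execution_mark(edit_days: set, deadline: date,
                       window_days: int = 10, required_days: int = 5) -> float:
        window_start = deadline - timedelta(days=window_days)
        in_window = {d for d in edit_days if window_start <= d <= deadline}
        return min(1.0, len(in_window) / required_days)

    # Example: edits on 4 of the 5 required days yields a partial mark.
    edits = {date(2023, 4, 5), date(2023, 4, 7), date(2023, 4, 9), date(2023, 4, 12)}
    print(execution_mark(edits, date(2023, 4, 14)))  # 0.8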

3. Discussion

The written assignment assessment method described here has a number of properties that stand out relative to conventional marking schemes. It assesses writing execution over the entire writing cycle (Graham and Sandmel, 2014). It promotes the spacing effect in learning (Ebbinghaus, 1885; Dunlosky et al., 2013) by encouraging frequent editing of the paper and reflection on the subject matter over an extended period of time. It is cheating-resistant, because every edit is tracked and accessible to the instructor, and it is neither affordable nor practical to hire a writer for the entire duration of the paper (Murdock, 2016; Dawson, 2021). And it introduces quantitative elements to written assignment metrics while leaving room for qualitative assessment as well.

This approach does have trade-offs to consider. Because the assessment has multiple deadlines, it requires more careful planning at the outset of the semester to ensure alignment with related content and to avoid conflicts with other types of assessments. Using Git requires more advanced knowledge of version control, which means yet another technology for the educator to master and to support students in. Some of the technical know-how can be outsourced if the university's Information Technology unit can offer support. Finally, more deadlines and more complexity lead to a greater variety of glitches and errors, and potentially an increased volume of student inquiries. This too, if the system were adopted institution-wide, would eventually fade away as students become acquainted with it across more of their courses.

The sample assignments have been made available openly on GitHub. They can be downloaded or forked freely. Pushing updates back into the main branch is also welcome.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://github.com/llorban/term.papers.

Author contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Funding

Publication fees were covered by a grant from the University of the South Pacific (# FE034-FAL15).

Acknowledgments

I am grateful for the insightful conversations with Drs. Aman Bassi, Gira Bhatt, and Farhad Dastur. Feedback from Dr. Micah Amd has been instrumental in publishing this manuscript.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Berry, C. (2021). Record Number of Plagiarism, Cheating Incidents at TRU Last Year. Available online at: Infonews.ca (accessed July 28, 2023).

Bouwer, R., and Dirkx, K. (2023). The eye-mind of processing written feedback: unraveling how students read and use feedback for revision. Learn. Instruct. 85, 101745. doi: 10.1016/j.learninstruc.2023.101745

Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., Rozenberg, P., et al. (2018). Contract cheating: a survey of Australian university students. Stud. Higher Educ. 2018, 1–20. doi: 10.1080/03075079.2018.1462788

Campbell, C. R., Swift, C. O., and Luther, D. (2000). Cheating goes hi-tech: online term paper mills. J. Manage. Educ. 24, 6. doi: 10.1177/105256290002400605

Cepeda, N. J., Vul, E., Wixted, J. T., Pashler, H., and Rohrer, D. (2008). Spacing effects in learning: a temporal ridgeline of optimal retention. Psychol. Sci. 19, 1095–1102. doi: 10.1111/j.1467-9280.2008.02209.x

Cheng, K.-H., Liang, J.-C., and Tsai, C.-C. (2013). University students' online academic help seeking: the role of self-regulation and information commitments. Internet Higher Educ. 16, 70–77. doi: 10.1016/j.iheduc.2012.02.002

Comas-Forgas, R., Lancaster, T., Calvo-Sastre, A., and Sureda-Negre, J. (2021). Exam cheating and academic integrity breaches during the COVID-19 pandemic: an analysis of internet search activity in Spain. Heliyon 7, e08233. doi: 10.1016/j.heliyon.2021.e08233

Dawson, P. (2021). Defending Assessment Security in a Digital World. London: Routledge. doi: 10.4324/9780429324178

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., and Willingham, D. T. (2013). Improving students' learning with effective learning techniques: promising directions from cognitive and educational psychology. Psychol. Sci. Public Inter. 14, 4–58. doi: 10.1177/1529100612453266

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kumar, A., et al. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Infor. Manage. 71, 102642. doi: 10.1016/j.ijinfomgt.2023.102642

Ebbinghaus, H. (1885). Über das Gedächtnis: Untersuchungen zur experimentellen Psychologie. Berlin: Duncker & Humblot.

Eckstein, G., Chariton, J., and McCollum, R. M. (2011). Multi-draft composing: an iterative model for academic argument writing. J. English Academic Purp. 10, 162–172. doi: 10.1016/j.jeap.2011.05.004

Friesen, J. (2023). Hired Exam-Takers, Blackmail and the Rise of Contract Cheating at Canadian Universities. Toronto, ON: The Globe and Mail.

Graham, S., and Sandmel, K. (2014). The process writing approach: a meta-analysis. J. Educ. Res. 104, 396–407. doi: 10.1080/00220671.2010.488703

Green-Lehrman, H. (2015). The Early Bird Gets the Grade: How Procrastination Affects Student Scores. New York, NY: Knewton.

Kemper, C., and Oxley, I. (2012). “In the beginning there were just files,” in Foundation Version Control for Web Developers, eds C. Kemper and I. Oxley (Cham: Springer Nature), 25–32.

Milentijevic, I., Ciric, V., and Vojinovic, O. (2008). Version control in project-based learning. Comp. Educ. 50, 1331–1338. doi: 10.1016/j.compedu.2006.12.010

Murdock, T. B., Stephen, J. M., and Groteweil, M. M. (2016). “Student dishonesty in the face of assessment: who, why, and what we can do about it,” in Handbook of Human and Social Conditions in Assessment (London: Routledge).

Noorbehbahani, F., Mohammadi, A., and Aminazadeh, M. (2022). A systematic review of research on cheating in online exams from 2010 to 2021. Educ. Inform. Technol. 27, 8413–8460. doi: 10.1007/s10639-022-10927-7

Rogerson, A. M. (2017). Detecting contract cheating in essay and report submissions: process, patterns, clues and conversations. Int. J. Educ. Integ. 13, 10. doi: 10.1007/s40979-017-0021-6

Schuwirth, L. W. T., and van der Vleuten, C. P. M. (2004). Different written assessment methods: what can be said about their strengths and weaknesses? Med. Educ. 38, 974–979. doi: 10.1111/j.1365-2929.2004.01916.x

Scriven, M. (1967). The Methodology of Evaluation. Chicago, IL: Rand McNally.

Smith, T. (2019). Buying College Essays is Now Easier Than Ever, But Buyer Beware. Washington, DC: National Public Radio.

Sullivan, M., Kelly, A., and McLaughlan, P. (2023). ChatGPT in higher education: considerations for academic integrity and student learning. J. Appl. Learn. Teach. 6, 17. doi: 10.37074/jalt.2023.6.1.17

Woods, R. C. (1933). The term paper: its values and dangers. Peabody J. Educ. 11, 87–89. doi: 10.1080/01619563309535181

Yazici, S., Durak, H. Y., Dünya, B. A., and Şentürk, B. (2022). Online versus face-to-face cheating: the prevalence of cheating behaviours during the pandemic compared to the pre-pandemic among Turkish university students. J. Comp. Assis. Learn. 39, 1–24. doi: 10.1111/jcal.12743

Yot-Domínguez, C., and Carlos, M. (2017). University students' self-regulated learning using digital technologies. Int. J. Educ. Technol. Higher Educ. 14, 38. doi: 10.1186/s41239-017-0076-8

Keywords: written assignment, version control, assessment methods, Git, higher education

Citation: Orbán LL (2023) Using version control to document genuine effort in written assignments: a protocol with examples for universities. Front. Educ. 8:1169938. doi: 10.3389/feduc.2023.1169938

Received: 20 February 2023; Accepted: 31 August 2023;
Published: 21 September 2023.

Edited by:

Gavin T. L. Brown, The University of Auckland, New Zealand

Reviewed by:

Milan Kubiatko, J. E. Purkyne University, Czechia
Lee Mackenzie, Liverpool Hope University, United Kingdom

Copyright © 2023 Orbán. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Levente L. Orbán, levente.orban@usp.ac.fj
