CURRICULUM, INSTRUCTION, AND PEDAGOGY article

Front. Educ., 19 October 2021
Sec. Digital Education
Volume 6 - 2021 | https://doi.org/10.3389/feduc.2021.763203

Multimodal Video-Feedback: A Promising Way of Giving Feedback on Student Research

  • 1Department of Education, University of Vienna, Vienna, Austria
  • 2Centre for Teacher Education, University of Vienna, Vienna, Austria

Feedback is a valuable pedagogical tool to guide students through research projects and to aid the acquisition of methodological knowledge. However, its potential is rarely exhausted. In this article, we describe one digital pedagogical solution to improve feedback practices in higher education: multimodal video-feedback. After showing how the process and outcomes of this technique differ conceptually from more traditional ways of giving feedback, we provide initial quantitative and qualitative evidence of its usefulness based on 77 course evaluations. We then discuss avenues for further research and how the practice itself could be developed and tailored to the specific needs of individual lecturers.

Introduction

Feedback is considered to be a main source of learning and a key aspect of teaching (cf. Poulos & Mahony, 2008; Rowe & Wood, 2008). Studies show a positive correlation between teachers' feedback and students' performance (Bijami et al., 2016). Feedback helps students to evaluate their own learning process and to identify gaps in their learning (Cavalcanti et al., 2020), which results in better student achievement. This is especially true for the rather fuzzy process of acquiring methodological knowledge and competences, since these build on complex mathematical proficiency and general knowledge of scientific conduct (cf. Owen, 2016), and since methods anxiety may interfere with the process (Bernstein & Allen, 2013; Onwuegbuzie & Wilson, 2000). Providing learners with an opportunity to practically engage in research and to reflect upon research methods and their own understanding of research plays an important role in the acquisition of method expertise (Lewthwaite & Nind, 2016).

However, good quality feedback is not always guaranteed and its potential is rarely fully exploited (Ajjawi et al., 2021; Bienstock et al., 2007; Orsmond & Merry, 2011; Rand, 2017). Especially in the context of distance learning and online teaching, providing good quality feedback becomes a challenge for teachers (Cavalcanti et al., 2020), because the weaknesses of the more traditional format of written feedback become more apparent and feedback in its ideal form as a continuing two-way communication (Dowden et al., 2013) is more likely to fail. This might be due to the negative emotions that feedback involves for teachers as well as for students. Because feedback is perceived as (potentially) harsh criticism, and because of miscommunication, students experience dissatisfaction, disappointment, and frustration, resulting in what teachers often note as students' disengagement. This may also be described as the inability to properly understand and utilize the teacher's feedback as a consequence of the emotional response (Mahfoodh, 2017). Teachers, in turn, feel disappointed and angry (Yu et al., 2021), since giving feedback is often perceived as a demanding activity that is "difficult, tense, and time-consuming" (Mahfoodh, 2017, p. 53). Based on these remarks, feedback proves to be an area in much need of improvement.

In this article, we describe one digital pedagogical approach to this challenge which we, in reference to Hung (2016), call multimodal video-feedback and which has been applied very successfully in the past, including in the less digital, pre-COVID-19 years. The digital character of this practice of giving feedback makes it especially useful for distance learning and hybrid education. More details, as well as a more complete definition of what we mean by multimodal video-feedback, will be given in the next section. This conceptual part, which clarifies the concept of video-feedback, is the major contribution that this article seeks to make. We then provide quantitative and qualitative data to assess the usefulness of this approach. Last, it is important to note that, in an effort to mix perspectives, this article is co-authored by a lecturer (feedback-giver) and a Master's student (feedback-receiver).

Pedagogical Framework

The Feedback Process

Multimodal video-feedback is a multimedia technique of giving feedback on manifest work products (e.g., written assignments, pictures, etc., which we now call feedback objects) in educational settings. It is both a process (giving feedback to learners) and a product (the actual feedback that the learner sees in a video format). By focusing on manifest work products, we exclude, for example, synchronous presentations as objects of feedback (although the multimodal video-feedback process could be used here, too, after some minor adaptations). Multimedia refers to the use of a minimum of two channels; at the very least it includes a display of the feedback object as a visual (through a screencast) and the narration of the feedback as an auditory channel. This may be supplemented with a camera-recording of the feedback-giver to facilitate the communication process and increase the feeling of personal connection (see below).

How is this different from the process and product of feedback when feedback is given as a textual response? For the sake of simplicity, we will refer only to textual feedback at this point. However, the workflow does not change much for other formats, for example when the feedback is delivered verbally in an (often resource-intensive) meeting with the learners. Dissecting the process of educational feedback-giving, we can discern four broad steps:

1. Inspecting the feedback object,

2. Spotting topics to give feedback about,

3. Prioritizing and filtering these topics for writing them down, and

4. Disseminating the feedback.

According to Hattie and Timperley (2007), effective feedback answers three questions: "What are the goals", "What progress is being made toward the goal", and "What activities need to be undertaken to make better progress?" (p. 86). Feedback is always given along these questions on four levels: performance on the task or product; process performance, that is, the underlying comprehension of the process needed to perform the task; the self-regulation of actions; and the self (Hattie & Timperley, 2007). Note that in the process of written feedback, steps one to three are invisible from the perspective of the learner. This is different with the multimodal video-feedback process that we propose here, since it is fully transparent to the learner. Here, the feedback-giver shares the feedback object visually (as viewed by the feedback-giver) and thinks aloud while inspecting it. The resultant video (which may or may not be enriched with a camera recording of the narration to increase the personal touch) is then sent to the learner instead of any textual comments. This means that steps one to four as outlined above are integrated into one single step; the process is the output. Figure 1 illustrates the differences between these two approaches. The process is demonstrated in the video by Froehlich (2021).

FIGURE 1. Process workflow of video-feedback compared to textual feedback.

Feedback Outcomes

These conceptually different workflows lead to different outcomes for learners and feedback-givers. What follows is a list of hypothesized outcomes that will then be tested by collecting empirical data about the learners’ perspectives.

Outcomes for the Receivers of Multimodal Video-Feedback

We argue that multimodal video-feedback entails many benefits for learners of research methods, especially in terms of:

• Creating personal connection,

• Explicating the scientific process,

• Increasing feedback density, and

• Clarifying communication.

The multimodal nature of the video-feedback discussed here makes it easier for the teacher to connect to the students on various levels compared to text-only communication. We propose that the verbal and visual components of multimodal video-feedback create a more personal feeling than solely written formats. By including facial expressions, gestures, and tone, communication happens on multiple levels. The tone of feedback can be encouraging and thus serve as a reward as well as a motivation to continue the hard work and to strive for improvement (Leibold & Schwarz, 2015). Leibold and Schwarz (2015) consider addressing students by name one key component of best practice in online feedback, and, likewise, video-feedback can be seen as creating a very individualized product for the student. With video-feedback, it is very clear that an effort has been made to focus on the individual student's work and that no templates or feedback boilerplates have been used [which is a (questionable) feature of many other digital feedback processes].

Multimodal video-feedback also explicates the scientific process and the reading process of the feedback-giver. With a view to enhancing the learning process and thereby the further performance of feedback receivers (Bijami et al., 2016), we consider that the verbal elements combined with visuals in multimodal video-feedback can enhance insights into the feedback-giver's perception of the feedback object as well as the approach taken to make sense of the task or product. This can be related to the idea of "decoding the disciplines" (Middendorf & Pace, 2004), where the expert's way of doing the work is explicated. Next to commenting on the correct application of research methods in a given context, the feedback-giver also explicates the process of how correctness is checked and evaluated, how parts of a research project are connected with each other (e.g., how the measurement affects the interpretation of an analysis), and so on. Hence, the receiver can reconstruct the process that led to the comments and observations on the learner's work, allowing for a thorough comprehension of the feedback. Following this process is not feasible in written feedback, as process and output are separate and only the final version of the feedback is usually shared (see Figure 1).

Multimodal video-feedback may increase feedback density. Burke's (2009) analysis of the feedback use of 350 students in humanities-related disciplines showed great dissatisfaction among feedback receivers with written feedback due to its brevity. Because written feedback is typically brief, it tends to stay superficial and not go into depth. However, detailed feedback is necessary for good quality feedback (Leibold & Schwarz, 2015). We assume that feedback-givers make a similar number of observations irrespective of the medium of choice. However, as outlined above, many more steps are needed to get from the observations to a textual format that can be shared with the learners. This is likely to reduce the number of points of feedback that are eventually shared with the learner. Since the associated processes of filtering and prioritizing are not needed with multimodal video-feedback, we infer that a greater number of observations are shared with the learners.

For feedback receivers to be able to comprehend the information they receive adequately and productively, it is important to preempt misunderstandings as much as possible (Rand, 2017). This requires the communication between giver and receiver to be as precise and clear as possible. Multimodal video-feedback features visual and auditory channels of communication, the feedback is highly contextualized (the learner sees what the feedback-giver is seeing at any moment), and the learner has substantial control over how the feedback is consumed. For example, the learner can adjust the playback speed, rewatch parts, pause, etc. (Froehlich & Winter, 2019). We hypothesize that this makes successful communication more likely and mitigates the miscommunications that often accompany more traditional modalities of feedback such as written formats (Mahfoodh, 2017), for example due to difficulties in understanding what is being said and referred to (Cavalcanti et al., 2020). If text is important for the learners, captions can easily be generated and delivered alongside the video (e.g., to attend to special needs of learners or simply to communicate on one additional channel for enhanced clarity).

Outcomes for the Feedback-Givers

We argue that multimodal video-feedback also has benefits for the feedback-givers in terms of 1) creating awareness about the resources needed to give feedback, 2) increased transparency, 3) the ability to recycle specific feedback for further use, and 4) a general reduction in the time needed to produce the feedback.

Feedback is not only considered an important means of learning by researchers (cf. Poulos & Mahony, 2008; Rowe & Wood, 2008; Frieling & Froehlich, 2017; Froehlich et al., 2017), it is also frequently demanded by the learners themselves. This, however, can also be a point of friction, as the (time) resources needed to produce feedback may be underestimated by learners. After all, in the "traditional" process of giving feedback, the learners might only see a few lines of text, and making an inference about the actual time needed to write these lines is difficult. As described above, with multimodal video-feedback the process of giving feedback equals the product that is ultimately shared with the learners. There can be no question about how many resources were needed to give the feedback, and the hope is that this translates into more realistic expectations towards the feedback-givers.

This first point can also be related to a general call for more transparency in teaching, which is by no means a new request (Anderson et al., 2013). We consider that the verbal elements combined with visuals in multimodal video-feedback can enhance insights into the feedback-giver's perception of the feedback object as well as the approach taken to make sense of it. Hence, the receiver is able to reconstruct the process that led to the comments and observations on the feedback object, allowing for a thorough comprehension of the feedback. For example, it is easier to recognize when the feedback-giver has misunderstood something. Following this process is not feasible in written feedback.

In the traditional feedback-giving process, the object of feedback and the feedback itself are separated from each other. The feedback cannot easily be shared in meaningful ways without also sharing the object of feedback. Multimodal video-feedback is self-contained; any information needed to make sense of the feedback is available in the multimedia file. This also allows for easier dissemination to other interested parties (unless restricted for other reasons, such as privacy concerns). There are many potential applications where this may be useful, for example, enriching methodological concepts in a methods textbook with concrete feedback sequences illustrating what can go wrong, or editing videos to group similar feedback together so that the nuances of a specific point can be understood more precisely. These resources, which require little extra work to create, can also be used productively in flipped learning scenarios (cf. Reidsema et al., 2017; Talbert & Bergmann, 2017; Froehlich, 2018).

Last, but potentially very important for many higher education professionals, we hypothesize that video-feedback takes less time to produce than traditional feedback. This argument is mainly based on the comparison of the two processes outlined above in Figure 1; the process for multimodal video-feedback contains fewer and less time-consuming steps. However, this of course is highly dependent on the feedback-giver; especially for feedback-givers new to the practice of multimodal video-feedback, there is a learning process that needs to be considered (e.g., getting everything set up from a technical point of view, developing "scripts" so that the feedback can be delivered both spontaneously and fluently, etc.). For example, in the demonstration video by Froehlich (2021), the following phases can be distinguished:

1. Greeting and introduction, which not only connects to the student but also previews what is to come;

2. Going through the whole document and thinking aloud about potential improvements;

3. Summarizing and prioritizing the main points while going over the full document again rapidly.

Learning Environment

The technique of video-feedback is applicable quite generically to educational contexts and beyond (e.g., the first author regularly uses the same technique to stimulate informal workplace learning; cf. Froehlich et al., 2015; Froehlich et al., 2017). Besides using it as a replacement for more traditional feedback techniques, it also functions very well in more open course formats, such as research internships (Froehlich et al., 2021). There, the personal connection that is (assumed to be) created becomes even more important.

The database used to assess the practice of video-feedback (see next section) includes full-time and part-time students of social science research methods courses of various types (research internships, Master's thesis supervision, methods courses) at undergraduate and graduate levels. The video-feedback was directed at various feedback objects, including research synopses, measurement instruments, analysis outputs and interpretations, and full drafts of research reports.

Since this is a digital technique, a certain level of digital infrastructure on the side of both feedback-givers and learners is needed (microphone, camera, screen recording software, data storage). However, none of these requirements seems uncommon for modern teaching.
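
To give a sense of how lightweight this setup can be, the following sketch shows one possible way of starting a screen-and-microphone recording by calling the free tool FFmpeg from a short Python script. This is only an illustration under specific assumptions (FFmpeg installed, a Linux desktop with X11 and PulseAudio); the function name and default file name are ours, and most lecturers will likely prefer the graphical recording tools already bundled with their conferencing or screencasting software.

# Minimal sketch (not the authors' actual setup): recording a screencast with
# microphone narration by calling FFmpeg from Python. Assumes FFmpeg is
# installed and a Linux desktop with X11 ("x11grab") and PulseAudio ("pulse");
# on other systems the capture devices differ.
import subprocess

def record_video_feedback(outfile: str = "video_feedback.mp4", minutes: int = 10) -> None:
    """Capture the screen plus microphone audio for a fixed duration."""
    cmd = [
        "ffmpeg",
        "-y",                                # overwrite an existing file without asking
        "-f", "x11grab", "-framerate", "25",
        "-i", ":0.0",                        # the default X11 display (the screen)
        "-f", "pulse", "-i", "default",      # the default microphone
        "-t", str(minutes * 60),             # stop after the given duration
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "aac",
        outfile,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    record_video_feedback()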

Assessment

Sample

In this article, we offer a preliminary assessment and evaluation of multimodal video-feedback based on quantitative and qualitative data collected as part of course evaluations in higher education learning environments. We present data from 77 course evaluations completed by students at three institutions of higher education in Austria who received multimodal video-feedback from the first author during their regular studies.

Instruments

Video-feedback was evaluated in absolute terms, that is, whether it was associated with the attributes informative, timesaving, personal, and instructive on a five-point answer scale (“1” = do not agree, “5” = agree a lot). The evaluation also contained a more relative assessment, where video-feedback was compared to text-based feedback (“1” = Text is better, “5” = Video-feedback is better). The dimensions used here were intelligibility, richness in information, individualization, and overall perceived quality of the feedback.

At the end of the evaluation, there were three open questions asking for the advantages, the disadvantages, and further observations on this method compared to the more traditional mode of written feedback.

Analysis

We report the means and standard deviations for each category and conducted one-sample t-tests to examine whether the results are statistically significantly greater than the midpoint of the answer scale (i.e., "neutrality").
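
As an illustration of this analysis, the sketch below shows how such a one-sample t-test against the scale midpoint could be run in Python with SciPy. The ratings used here are hypothetical placeholders rather than the actual evaluation data, and the one-sided test via the "alternative" argument requires SciPy 1.6 or newer.

# Minimal sketch of the analysis described above: a one-sample t-test of item
# ratings against the scale midpoint ("neutrality" = 3). The ratings below are
# hypothetical placeholders, not the actual evaluation data.
import numpy as np
from scipy import stats

ratings = np.array([5, 4, 4, 5, 3, 4, 5, 2, 4, 5, 4, 3, 5, 4, 4])

mean, sd = ratings.mean(), ratings.std(ddof=1)

# One-sided test: is the mean significantly greater than the midpoint of 3?
t_stat, p_value = stats.ttest_1samp(ratings, popmean=3.0, alternative="greater")

print(f"M = {mean:.2f}, SD = {sd:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")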

Quantitative Results

Figure 2 shows the means and standard errors of the learners' absolute ratings of multimodal video-feedback. Learners agreed with all (positive) statements about video-feedback; it was judged to be informative (M = 4.18, SD = 1.05), timesaving (M = 4.03, SD = 1.10), personal (M = 4.32, SD = 1.23), and instructive (M = 4.00, SD = 1.10). All of these means are significantly different from neutrality (neutrality = 3.00, p < 0.01).

FIGURE 2. Rating of the absolute questions ("5" = agree a lot that multimodal video-feedback is associated with the respective attribute).

The students also rated multimodal video-feedback as relatively superior to traditional feedback on all dimensions (see Figure 3). This was especially true for the attributes of individualization (M = 4.22, SD = 0.88) and intelligibility (M = 4.00, SD = 1.14) and slightly less so for overall perceived quality of the feedback (M = 3.87, SD = 0.83) and richness in information (M = 3.54, SD = 1.24). All these means are significantly different from neutrality (neutrality = 3.00, p < 0.01).

FIGURE 3. Rating of the relative questions ("5" = video-feedback is perceived to be better in terms of the respective attribute; "3" = both methods of feedback are rated equal).
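
For readers who wish to retrace these significance tests, the short sketch below recomputes the one-sample t statistics from the summary statistics reported above. It assumes n = 77 complete responses per item, which is a simplification if individual items had missing answers, so the exact values may deviate slightly from those underlying Figures 2 and 3.

# Recomputing the one-sample t-tests against the scale midpoint (3.0) from the
# means and standard deviations reported above. Assumes n = 77 responses per
# item; missing answers on individual items would change the values slightly.
from math import sqrt
from scipy.stats import t as t_dist

N, MIDPOINT = 77, 3.0
reported = {
    "informative": (4.18, 1.05),
    "timesaving": (4.03, 1.10),
    "personal": (4.32, 1.23),
    "instructive": (4.00, 1.10),
    "individualization": (4.22, 0.88),
    "intelligibility": (4.00, 1.14),
    "overall quality": (3.87, 0.83),
    "richness in information": (3.54, 1.24),
}

for attribute, (mean, sd) in reported.items():
    t_stat = (mean - MIDPOINT) / (sd / sqrt(N))  # one-sample t statistic
    p_value = t_dist.sf(t_stat, df=N - 1)        # one-sided p-value
    print(f"{attribute:<25} t({N - 1}) = {t_stat:5.2f}, p = {p_value:.2g}")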

Qualitative Results

We applied thematic analysis to the open-ended questions of the survey to distill the most important themes and summarize the learners' perceptions. Overall, this mode of giving and receiving feedback was a much-welcomed alternative in the context of distance learning. Students perceived it as an innovative method that facilitates learning about the research process and research methods. One student expressed this in the following words: "I got video-feedback for the first time in this class. I find it very positive, in principle (especially now in distance learning)" (Student; all translations by the second author).

Two main advantages of multimodal video-feedback were mentioned. First, since the feedback included a visualization of the feedback-giver's process of going through the feedback object, it was described as not only informative but also very detailed and focused on the problem areas for improvement. This allowed the students to comprehend what the feedback-giver was referring to and mitigated "translation problems", that is, miscommunication followed by disorientation on the side of the receiver regarding what the feedback meant and what it referred to: "It's a good way to really show what you mean and where you mean it, right in the document" (Student).

The verbal communication of feedback in particular was described as advancing comprehension, since spoken feedback can be articulated more precisely.

Second, the feedback, especially in comparison to written feedback, was perceived as more personal, which was particularly appreciated in the context of distance learning: "It was nice to receive feedback in this way, because it gave me a feeling of personal supervision, which is hardly possible anymore in times of COVID" (Student). The receivers felt directly and personally addressed, and the format allowed for highlighting specific areas of improvement for the individual (not just related to the task, but also to the self). The respondents reported that this enhanced their willingness to engage more deeply with the feedback and to carefully try to understand what it was actually saying.

Regarding the time investment involved in receiving multimodal video-feedback, the opinions of the participants were divided. While some perceived this modality of receiving feedback as more time-consuming than the written mode, others emphasized the aspect of "saving time" through clearer communication. The perception of video-feedback taking more time was first assumed to be related to the feedback being more detailed and dense. While this assumption remains to be proven, the main issue noted was that key points could not always be extracted on a first pass through the feedback. Some needed several iterations of going back and forth, or through the whole video, to carve out the most important points. Taking written notes was also described as a useful practice. Irrespective of the actual time used, this may hint at a different learning process induced by video-feedback as a cognitive stimulus.

This is where the feedback-giver needs to make sure to follow a good structure in order to enhance the quality of the feedback. For example, going through the feedback object step-by-step and then providing a roundup at the end that also gives pointers for prioritization was mentioned as a useful approach.

Related to students' comments about having trouble identifying the main points the feedback refers to, one major disadvantage students noted is what we label a "lack of clarity". Compared to written feedback, the students pointed out that the main points of the feedback were not always easily identifiable. Since the feedback is not "in front of one's eyes", receivers have to take notes, filter out the less important aspects, and decide what to concentrate on. That said, it can be expected that the majority of individuals confronted with this type of feedback for the first time will need some adjustment and learning about how to handle the method.

A good path for transitioning to this modality may lie in the suggestion by some of the study participants to reduce the weaknesses of both modes of feedback by combining or complementing video-feedback with written formats:

"I like to work with feedback lists (i.e., written feedback) myself, but they have the disadvantage that they are very impersonal and not everyone can handle this direct form of feedback, especially if there is [a lot] that needs to be revised. Combining video-feedback with verbal (i.e., written) feedback lists would be great. Video-feedback makes it easier to deal with feedback personally and the score lists provide the necessary structure and overview" (Student).

The written feedback could contain bullet points summarizing the main points of what was said in the video and offer orientation to the receiver.

Practical Implications and Constraints

We set out to test a new technique of giving feedback that is geared towards the teaching of research methods (but that seems applicable beyond this narrow scope). We hypothesized that this technique, which essentially makes the process of giving feedback also the sharable output, holds many benefits for both the learners and the feedback-givers. The mixed-methods evaluation of multimodal video-feedback from the perspective of learners, as offered in this article, is, as stated above, only preliminary. That said, the quantitative and qualitative results are not only in agreement with each other, but also suggest very strong positive effects of video-feedback. This is a good starting point for additional, more nuanced studies to investigate how exactly video-feedback delivers value for learners and feedback-givers and which evidence-based suggestions (Froehlich, Forthcoming) could inform the best practice of video-feedback.

One question for discussion would also be whether such a universal practice would be desirable or, in fact, possible. After all, video-feedback is a rather personal practice, which was also highlighted as a very positive aspect in the qualitative part of the analysis. At the same time, this has often been a critical point raised in train-the-trainer seminars conducted on the topic of video-feedback: the practice of thinking aloud, the probable perception of a lack of structure, and other features of impromptu speaking may be perceived as a major challenge. But as with all teaching methods, the technique as presented here is just a template, and adaptations are necessary to increase the fit to the teacher and the learners. While the learning curve may seem steep in the beginning, a lot can be done to scaffold this process (e.g., reading larger parts of the feedback objects in advance, pausing frequently during the recording, etc.).

On the side of the prospective video-feedback-giver, we deem three competencies and dispositions especially relevant. First, the feedback-giver needs to be competent in giving verbal feedback without much preparation. It does not always need to be fully impromptu, but too much preparation will not only increase the time resources needed to produce the feedback but also potentially decrease the method's value. The impromptu reaction, including the original facial expressions, may deliver important information.

Second, the feedback-giver also needs some tolerance towards making errors (such as slips in speaking or interrupted trains of thought), especially because the expertise of the feedback-giver might be questioned by the receiver (cf. Blömer et al., 2021). In the first author's initial attempts at giving video-feedback, some mistakes were edited out before sharing the feedback with the learners. This turned out to be an unfavorable practice, as it makes the practice more time-consuming and conceals the first reaction. It may also lead to less tolerance towards making mistakes on the side of the learners, which is not desirable in many learning environments (Tulis, 2013). It is our experience that making some minor errors and clearly communicating about misunderstandings during the video-feedback is a helpful approach, as it shows the true flow of making sense of the feedback object and may also appear more honest and trustworthy (as privately communicated by students to the first author). Note that the aforementioned aspects improve as one gains more experience and becomes familiar with this feedback method.

Third, a certain degree of technological competence is required. However, it is our perception that this point is usually over-emphasized; teachers who have some experience with distance education will likely find tools in their repertoire that they can reuse also for the purpose of recording and disseminating their video-feedback.

That said, another fruitful avenue for further research could be directed towards the feedback-givers to complement the students’ perspectives that were the focus of this article. What are perceived as strong and weak points from the teacher’s point of view? And how technologically challenging is the process of feedback-giving?

Last, the COVID-19 pandemic and the subsequent shift to more digital education provide yet another interesting lens and context through which to evaluate this feedback modality. While there is too little data to compare students' pre-pandemic and in-pandemic reactions towards video-feedback, it is our perception that the sentiment towards video-feedback has become more positive. Especially in contexts of 100% distance education, the more personal approach and connection offered by video-feedback is very much appreciated. However, as stated before, more research is needed to confirm this hypothesis.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author Contributions

Conceptualization: DF; Development of the pedagogical design: DF; Data collection: DF; Quantitative analysis: DF; Qualitative analysis: DG; Writing draft: DF; Editing: DF and DG.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ajjawi, R., Kent, F., Broadbent, J., Tai, J. H.-M., Bearman, M., and Boud, D. (2021). Feedback that works: A realist review of feedback interventions for written tasks. Stud. Higher Edu., 1–14. doi:10.1080/03075079.2021.1894115

Anderson, A. D., Hunt, A. N., Powell, R. E., and Dollar, C. B. (2013). Student Perceptions of Teaching Transparency. J. Eff. Teach. 13 (2), 38–47.

Bernstein, J. L., and Allen, B. T. (2013). Overcoming Methods Anxiety: Qualitative First, Quantitative Next, Frequent Feedback Along the Way. J. Polit. Sci. Edu. 9 (1), 1–15. doi:10.1080/15512169.2013.747830

Bienstock, J. L., Katz, N. T., Cox, S. M., Hueppchen, N., Erickson, S., and Puscheck, E. E. (2007). To the point: medical education reviews--providing feedback. Am. J. Obstet. Gynecol. 196 (6), 508–513. doi:10.1016/j.ajog.2006.08.021

Bijami, M., Pandian, A., and Singh, M. K. M. (2016). The Relationship between Teacher's Written Feedback and Student's Writing Performance: Sociocultural Perspective. Int. J. Educ. Literacy Stud. 4 (1), 59–66. doi:10.7575/aiac.ijels.v.4n.1p.59

Blömer, L., Voigt, C., and Piwowar, A. (2021). Videoproduktion: Entwicklung eines adaptiven Wegweisers für Hochschullehrende [Video production: Development of an adaptive guide for higher education teachers]. Informatik 2020. Bonn: Gesellschaft für Informatik. doi:10.18420/inf2020_44

Burke, D. (2009). Strategies for using feedback students bring to higher education. Assess. Eval. Higher Edu. 34 (1), 41–50. doi:10.1080/02602930801895711

Cavalcanti, A. P., Diego, A., Mello, R. F., Mangaroska, K., Nascimento, A., Freitas, F., et al. (2020). How good is my feedback? Proc. Tenth Int. Conf. Learn. Analytics Knowledge, 428–437. doi:10.1145/3375462.3375477

Reidsema, C., Kavanagh, L., Hadgraft, R., and Smith, N. (Editors) (2017). The Flipped Classroom: Practice and Practices in Higher Education. 1st Edn. Berlin: Springer.

Dowden, T., Pittaway, S., Yost, H., and McCarthy, R. (2013). Students' perceptions of written feedback in teacher education: ideally feedback is a continuing two-way communication that encourages progress. Assess. Eval. Higher Edu. 38 (3), 349–362. doi:10.1080/02602938.2011.632676

Rowe, A. D., and Wood, L. N. (2008). Student Perceptions and Preferences for Feedback. Asian Soc. Sci. 4 (3), 78–88. doi:10.5539/ass.v4n3p78

Frieling, M., and Froehlich, D. E. (2017). "Homophilie, Diversity und Feedback: Eine soziale Netzwerkanalyse [Homophily, diversity, and feedback: A social network analysis]," in Internationales Personalmanagement: Rollen – Kompetenzen – Perspektiven. Implikationen für die Praxis [International human resource management: Roles – competencies – perspectives. Implications for practice]. Berlin: SpringerGabler.

Froehlich, D. E., Beausaert, S., and Segers, M. (2017). Development and validation of a scale measuring approaches to work-related informal learning. Int. J. Train. Dev. 21 (2), 130–144. doi:10.1111/ijtd.12099

Froehlich, D. E., Hobusch, U., and Moeslinger, K. (2021). Research Methods in Teacher Education: Meaningful Engagement through Service-Learning. Front. Educ. 6. doi:10.3389/feduc.2021.680404

Froehlich, D. E. (2018). Non-technological learning environments in a technological world: Flipping comes to the aid. N.Appr.Ed.R 7 (2), 88–92. doi:10.7821/naer.2018.7.304

Froehlich, D. E. (2021). Video-Feedback Demo Video [Mp4]. Available at: https://osf.io/vf3nx/

Froehlich, D. E., and Guias, D. (Forthcoming). “Bildungswissenschaft in Begriffen,” in Theorien und Diskursen. 1st Edn, Editors M. Huber, and M. Döll (Wiesbaden: Springer VS).

Froehlich, D. E., and Winter, C. (2019). "Mehr als Lernvideos—Der Einsatz von Videos in der digitalen Lehre [More than learning videos—The use of videos in digital teaching]," in Tagungsband zur 2. Online-Tagung Hochschule digital.innovativ | #digiPH2 Digital-innovative Hochschulen: Einblicke in Wissenschaft und Praxis. Editors M. L. Kieberl, and S. Schallert (Austria: Verein Forum neue Medien in der Lehre Austria), 126–135.

Hattie, J., and Timperley, H. (2007). The Power of Feedback. Rev. Educ. Res. 77 (1), 81–112. doi:10.3102/003465430298487

Hung, S.-T. A. (2016). Enhancing feedback provision through multimodal video technology. Comput. Edu. 98, 90–101. doi:10.1016/j.compedu.2016.03.009

Leibold, N., and Schwarz, L. M. (2015). The Art of Giving Online Feedback. J. Eff. Teach. 15, 34–46.

Lewthwaite, S., and Nind, M. (2016). Teaching Research Methods in the Social Sciences: Expert Perspectives on Pedagogy and Practice. Br. J. Educ. Stud. 64 (4), 413–430. doi:10.1080/00071005.2016.1197882

Mahfoodh, O. H. A. (2017). "I feel disappointed": EFL university students' emotional responses towards teacher written feedback. Assessing Writing 31, 53–72. doi:10.1016/j.asw.2016.07.001

Middendorf, J., and Pace, D. (2004). Decoding the disciplines: A model for helping students learn disciplinary ways of thinking. New Dir. Teach. Learn. 2004 (98), 1–12. doi:10.1002/tl.142

Onwuegbuzie, A. J., and Wilson, V. A. (2000). Statistics Anxiety: Nature, Etiology, Antecedents, Effects, and Treatments: A Comprehensive Review of the Literature. Paper presented at the Annual Meeting of the Mid-South Educational Research Association, Lexington, Kentucky, November 5, 2000. Available at: https://files.eric.ed.gov/fulltext/ED448202.pdf

Orsmond, P., and Merry, S. (2011). Feedback alignment: effective and ineffective links between tutors' and students' understanding of coursework feedback. Assess. Eval. Higher Edu. 36 (2), 125–136. doi:10.1080/02602930903201651

Owen, L. (2016). The Impact of Feedback as Formative Assessment on Student Performance. Int. J. Teach. Learn. Higher Edu. 28 (2), 168–175.

Poulos, A., and Mahony, M. J. (2008). Effectiveness of feedback: the students' perspective. Assess. Eval. Higher Edu. 33 (2), 143–154. doi:10.1080/02602930601127869

Rand, J. (2017). Misunderstandings and mismatches: The collective disillusionment of written summative assessment feedback. Res. Edu. 97 (1), 33–48. doi:10.1177/0034523717697519

Talbert, R., and Bergmann, J. (2017). Flipped Learning: A Guide for Higher Education Faculty. Sterling, VA: Stylus Publishing.

Tulis, M. (2013). Error management behavior in classrooms: Teachers' responses to student mistakes. Teach. Teach. Edu. 33, 56–68. doi:10.1016/j.tate.2013.02.003

Yu, S., Zheng, Y., Jiang, L., Liu, C., and Xu, Y. (2021). “I even feel annoyed and angry”: Teacher emotional experiences in giving feedback on student writing. Assessing Writing 48, 100528. doi:10.1016/j.asw.2021.100528

Keywords: feedback, higher education, multimodal feedback, project-based learning, research methods, video, video-feedback, teaching method

Citation: Froehlich DE and Guias D (2021) Multimodal Video-Feedback: A Promising Way of Giving Feedback on Student Research. Front. Educ. 6:763203. doi: 10.3389/feduc.2021.763203

Received: 23 August 2021; Accepted: 06 October 2021;
Published: 19 October 2021.

Edited by:

Sarah Marrs, Virginia Commonwealth University, Richmond, United States

Reviewed by:

David Marshall, Auburn University, Auburn, United States
Bob Edmison, Virginia Tech, Blacksburg, United States

Copyright © 2021 Froehlich and Guias. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Dominik Emanuel Froehlich, dominik.froehlich@univie.ac.at
