Brief Research Report

Front. Commun., 27 October 2023
Sec. Language Communication
Volume 8 - 2023 | https://doi.org/10.3389/fcomm.2023.1178516

“For the Record”: applying linguistics to improve evidential consistency in police investigative interview records

  • 1Aston Institute for Forensic Linguistics, College of Business and Social Sciences, Aston University, Birmingham, United Kingdom
  • 2School of Social Sciences and Humanities, Loughborough University, Loughborough, United Kingdom

The “For the Record” project (FTR) is a collaboration between a team of linguistic researchers and police in the England & Wales jurisdiction (E&W). The aim of the project is to apply insights from linguistics to improve evidential consistency in police interview transcripts, which are routinely produced by transcribers employed by the police. The research described in this short report is intended as a pilot study, before extension nationally. For this part of the project, we analysed several types of data, including interview audio and transcripts provided by one force. This identified key areas where current transcription practice could be improved, and a series of recommendations was made to that force. This pilot study indicates that there are three core components of quality transcription production in this context: Consistency, Accuracy, and Neutrality. We propose that the most effective way to address the issues identified is through developing new training and guidance for police interview transcribers.

1. Introduction

The FTR project applies linguistic findings to the process of producing written transcripts of police investigative interviews with suspects. The current standard procedure is that these interviews are audio recorded, then for any case which will proceed to court,1 a transcript is produced by administrative staff employed by the relevant police force. This process is of particular importance given that these are evidential documents, presented in court as part of the prosecution case, yet we know from linguistics that original spoken data are necessarily substantially altered through the process of being converted into written format (see below). Moreover, once a transcript or ROTI (Record of Taped Interview) has been produced, it is generally relied upon in place of the audio recording, making its accuracy all the more important.

The overall objective of this research is to substantially increase the accuracy and consistency of investigative interview evidence, especially in terms of the representation of spoken language features. Our aim is to enable transcribers to produce interview records which encapsulate more of the meaning conveyed by the original spoken interaction, and to enable consistency of interpretation of features such as punctuation and pauses for the reader (i.e., investigating officers, Crown Prosecution Service, courts, juries), thus removing a major source of subjective and potentially inaccurate interpretation of criminal evidence. We emphasise that the intended outcome is not the production of a “perfect” transcript, since this is an impossibility. Instead, the intention is to reduce the “contamination” or distortion which transcription can introduce, and to raise awareness in legal contexts of the fundamental limitations of transcripts.

2. Rationale

In E&W, before the full national implementation of the Police and Criminal Evidence Act 1984 (PACE), written records of interviews with suspects were created by the interviewer after the event, based on any contemporaneous notes which had been made during interview, and on their own memory. A series of infamous miscarriages of justice (e.g., Bridgewater Four, Derek Bentley; see e.g., Coulthard, 2002) shone a harsh light on this practice, proving that these records could be not only highly inaccurate but completely fabricated. PACE therefore introduced the mandatory audio recording of all interviews with suspects (with only a handful of exceptions, e.g., in terrorism cases). This was of course a substantial improvement to policing practice, and one in which E&W has led the way internationally. Audio-recorded interviews have subsequently been treated as the solution to the problem of inaccurate or unreliable interview evidence; however, that is not entirely the case. In fact, the recorded interview gives rise to another potential source of contamination or distortion, through the production of the written record of the interview. Although the audio (or video) recording is always available, in practice the written transcript is heavily relied upon once it has been produced (see Haworth, 2018). The written record becomes a central piece of criminal evidence, passed on to the Crown Prosecution Service (CPS) as part of the case file, then presented as part of the prosecution case in court, where it routinely goes before the jury as part of the package of evidence on which they must reach their verdict. Juries are of course free to make whatever judgments they wish of these materials; we do not seek to interfere with this. Our concern here is simply that any evidence presented to the court should be as accurate and unaltered as possible.

However, we know from decades of research in linguistics that it is not possible to convert spoken language into a written text without changing it. Linguistic research has indicated that spoken and written modes are essentially different “languages”; they are non-equivalent (e.g., Biber, 1988; Halliday, 1989). Conversion from one to the other is therefore almost like a process of translation and interpretation; this means it is necessarily subjective and inexact. The difficulty of transcribing spoken data has in fact long been recognised as a methodological challenge by linguists (see e.g., Ochs, 1979; Edwards and Lampert, 1993; Leech et al., 1995; Bucholtz, 2007, 2009), since we ourselves often need to create written records of the spoken data we record for our research. It has been a particular concern in Conversation Analysis (e.g., Jefferson, 2004; Hepburn and Bolden, 2012). This work shows that transcription is actually a very complex and challenging task, if it is to be done accurately and fairly. A particular problem identified is that it is impossible for any transcriber not to bring in their own perspectives and unconscious biases; in fact Bucholtz (2007) describes transcription as “an inherently and unavoidably sociopolitical act” (p. 802).

Yet transcription of speech routinely occurs in various legal contexts, several of which have been studied by linguists. All such studies have found serious problems with the official transcripts produced. This includes studies of transcripts of courtroom proceedings (e.g., Walker, 1986, 1990; Eades, 1996; Tiersma, 1999, p. 175–99), covert recordings (e.g., Shuy, 1993, 1998; Fraser, 2014, 2018, 2022), and interpreted interviews (e.g., Filipović, 2022); see also our own prior work which informs this project (Haworth, 2018; Richardson et al., 2022).

All of the above research background indicates a strong likelihood that official transcripts of police investigative interviews may not be as accurate and balanced as is generally taken for granted. This is even more the case when we consider that most Records of Taped Interview (ROTI) involve a good deal of editing and summarising, rather than being an attempt to provide a “full,” “verbatim” transcript. Editing, or summarising, is a highly selective and subjective process, with the summariser having to make choices as to what to include and what to omit. This process has not been the subject of sustained prior research (although see Haworth, 2018; Filipović, 2022).

Despite these clear warning signs from the linguistic research, none of this has yet made its way into professional practice within the legal system. In fact, not only are the potential problems not recognised, it has actually been built into practice through case law2 and legislation3 that tapes, transcripts, and summaries should be treated as interchangeable, and in essence identical. Our starting point for this project, then, is that potentially serious contamination of interview evidence is currently routinely overlooked and unrecognised; but also that linguistic research and analysis can readily be applied in order to redress this.

3. Method

Given that interview records are produced within each force, and the process varies from force to force,4 we chose to work with one force first as a pilot project. This enabled us to conduct detailed analysis across all aspects of the process, from multiple angles and methodological approaches; in other words, to prioritise depth over breadth. It enabled us to take into account specific local practices, and also ensured that our findings are as relevant as possible to our partner force. We collected two types of data from our partner force: (1) interview recordings and their corresponding official transcripts; and (2) practitioner input through focus groups and an online questionnaire. Our research questions for these data were:

• How are written records of interview currently produced and used in this force?

• Is there an unrecognised problem regarding evidential consistency in those records?

Alongside this, we conducted experiments to test our hypothesis that changes in the format of the data (i.e., conversion from spoken to written, and transcription choices) affect its interpretation. This was to ensure that there was a sound evidence base for any recommendations we made.

The project thus involved three strands, each with its own methodological approach and data, but interrelated, with each informing the others as the project progressed. Findings from all three strands were then combined into one unified analysis, through which key themes were identified. As an overall objective, we sought to investigate what linguistics can offer in terms of improving the process.

3.1. Experiments

Our experiments were designed to do two key things: (1) test the assumption that people treat audio and written information similarly, and (2) examine how changes in the representation of different linguistic features could influence the way people think about the information contained within transcripts.

In an initial experiment (see Deamer et al., 2022), a 3-min clip of a publicly released police interview with a suspect in a UK murder enquiry, sourced from YouTube, was used to elicit views about the interviewee from participants, recruited using convenience sampling (data provided by our partner force were not used due to data protection and confidentiality). A total of 30 adult participants heard the original audio recording; 30 saw a written transcript of the same extract (groups were matched for gender and age). The transcript was produced by the research team with the aim of including as much detail as possible, while also maintaining legibility for a lay audience. Participants were then presented with a series of questions (quantitative and qualitative) to determine their interpretation of the interview and the interviewee. We wanted to assess whether there would be any differences in the judgements of those who heard the audio compared with those who read the transcript. Responses to questions about what in the language had led participants to give their answers enabled us to identify specific features which may have influenced participants' perceptions.

We then ran a second experiment which further explored these issues (see Tompkinson et al., 2023). Using the same interview data, but additionally manipulating one variable which both prior research (e.g., Nakane, 2007, 2011; Heydon, 2011) and the qualitative findings of the first experiment indicated to be of interest, we created versions of the transcript which represented pauses/silent hesitations in different ways. This experiment was much larger, eliciting responses from 250 participants, recruited via Prolific.5 Again, we tested whether changing the mode of representation (audio vs. transcript) would affect participants' perceptions, and we also wanted to assess whether the different representations of pauses would impact the judgements that people were prepared to make about the interviewee.

3.2. Linguistic analysis of interview data

A total of 25 recent audio-recorded suspect interviews and 4 video-recorded witness interviews,6 ranging from 6 to 92 min, and their accompanying transcripts, were provided for analysis by the force under a Data Processing Agreement, and with ethical approval from Aston University. The original data were redacted, anonymised and pseudonymised on police premises. A comparative analysis was undertaken of the interactional activities captured by the audio recording, and what was represented in the written records (see Richardson et al., 2023). This involved close qualitative linguistic analysis informed predominantly by Conversation Analysis. This enabled us to identify the social actions that are performed by speakers as they interact, and to evidence the substantial changes that can occur in the process of transforming the spoken interaction into a written representation. In particular, it makes features of the talk which go beyond the words spoken accessible and analysable, by documenting them in detailed technical transcripts (following Jefferson, 2004); a brief illustration of these conventions is given below.
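
For readers unfamiliar with this transcription system, the following is a minimal sketch of a small subset of the Jefferson (2004) conventions. The symbol glosses reflect standard Conversation Analysis usage, but the code structure and the example fragment are our own invented illustrations, not drawn from the project data.

# A small, illustrative subset of Jefferson (2004) transcription
# conventions, as standardly used in Conversation Analysis.
JEFFERSON_SYMBOLS = {
    "(0.5)": "timed silence, in seconds",
    "(.)":   "micropause: a very brief, untimed silence",
    "[ ]":   "onset and end of overlapping talk",
    "::":    "elongation of the preceding sound",
    ".hhh":  "audible in-breath",
    "wor-":  "a cut-off word or sound",
}

# Invented fragment (IR = interviewer, IE = interviewee):
#   IR: so where were you (0.6) on the night in question
#   IE: I was (.) at ho::me (0.8) I didn't [go out]
#   IR:                                    [okay  ]
for symbol, meaning in JEFFERSON_SYMBOLS.items():
    print(f"{symbol:>6}  {meaning}")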

3.3. Questionnaires and focus groups

An online, anonymous questionnaire was completed by the full cohort of the force's transcribers at the time (n = 9), covering basic aspects of their job and their approach to transcribing, along with a very short transcription task. Focus groups with transcribers (n = 6) and police interviewers (n = 13), recruited as volunteers via our internal force contact, were subsequently conducted on police premises across 3 sites, to minimise participant inconvenience. These were held separately, amounting to 6 focus groups and over 11 h of audio-recorded data. This was anonymised and transcribed, and a thematic analysis undertaken using NVivo. Once the main research was concluded, the research team returned to the force for two further focus groups, at which we presented our main findings and proposed recommendations, inviting feedback and discussion. These return focus groups combined transcribers and interviewers from the original focus groups, enabling direct discussion between these cohorts.

4. Results

The FTR project has produced a large volume of research findings. More detailed findings of the individual project strands are available in Deamer et al. (2022), Richardson et al. (2023), and Tompkinson et al. (2023), with more to follow. Detailed combined findings and outcomes from the FTR project as a whole will also be published in due course. The key combined findings can be summarised as follows:

• Transcribers are highly aware of the stakes and the potential consequences of their work, and they take this very seriously, aiming to produce balanced and fair records. However, numerous aspects of current transcription practice undermine this aim.

• The transcribers receive no training in transcription. Instead, they report relying on their peers for ad hoc support; practice has thus developed within-group, without official input or oversight. They also receive very little, if any, feedback on the transcripts they produce. Bad or inappropriate practice can therefore easily become embedded at a local level, and there is no mechanism for ensuring consistency. There is also no established checking procedure, and therefore no system in place to catch errors and mistakes.

• For the parts of the interview rendered “verbatim”/“in full,” we did not find systematic or widespread problems with the basic accuracy of recording the bare words spoken. However, some errors were found, including simple “typos” but also instances where content was apparently misheard, leading to incorrect transcription. Such errors may not be common, but they can be of real significance: we identified at least two instances where meaning was affected regarding important evidential points. For example, one transcript included “he met someone knew.” This confuses two very different propositions, with opposite meanings: “he met someone he knew,” or “he met someone new.” It is not possible to work out which was meant from this transcript alone.

• There was variation in use of the standard layout on the interview transcript pro forma, which in places could give rise to unintended interpretations. As well as consistency, this raises questions of neutrality, given that layout involves subjective decisions on the part of the transcriber. For example, the most common practice was to use a new text box for a new speaker's turn. But we also found examples of turns being split into more than one box, which has the effect of visually highlighting a particular part of that turn, creating a risk that that part is taken out of context and thereby misinterpreted. For example, an apparently incriminating admission was “highlighted” in this way, but had been separated from the very important conditional it followed on from: the interviewee stated that they didn't know what had happened and had no memory of doing the act they were accused of, but then said “if there's enough evidence to say I've done it I'll put my hands up and say || yeah I've done it.” These final words were presented on a new line in a new text box with the timing also given alongside, all of which gave them arguably undue prominence.

• Consistency was found to be a key issue. There was a lack of consistency in the way that different transcribers represented different aspects of speech in the transcripts, giving rise to potential confusion as to what was meant. There were also instances of inconsistency within the same transcript. For example, several different methods were observed to be used to represent inaudible parts of the recording, such as “\\\ unintelligible”; “inaudible”; “……”. As an added complication, the same resource was found to be used to represent different features. For example, a series of dots (“…..”) was used to indicate four different phenomena: transition from one mode of transcribing to another (e.g., summary to “verbatim”); silence; cut-off talk; and overlapping speech. Unsurprisingly, interviewers reported a range of interpretations of this feature when they encounter it in their interview transcripts, demonstrating that meaning is being lost due to this practice (a sketch of how such ambiguous markers could be flagged for standardisation follows this list). One interviewer described having to go back to the transcriber for clarification of the meaning of “…” in one case, demonstrating how transcription inconsistency is giving rise to inefficiency.

• Another key identified area of inconsistency was in the representation of pauses/silence. This is of importance given that these can be highly significant interactionally (e.g., Nakane, 2007), and thus create meaning for listeners, as borne out in our experimental findings. Our finding that pauses were either omitted, or transcribed inconsistently, in our dataset is therefore a cause for concern.

• Emotion is not represented in the transcripts in our dataset. We use the term “emotion” here to cover a broad range of audible non-verbal aspects of a person's talk, such as laughter or crying. The display of emotion is a crucial part of human social interaction, conveying a great deal of additional meaning beyond the bare words spoken. This was borne out in our experimental findings, with numerous participants commenting on displays of the interviewee's emotion either as heard in the audio or represented in the transcript. The omission of emotion from transcripts can therefore have serious consequences, especially where the emotional state of the interviewee becomes relevant evidentially. This is a phenomenon with which interviewers are very familiar, as reflected in several case examples discussed in the focus groups, including interviewee displays of anger and loss of emotional control. Interestingly, it is also well recognised by the transcribers, which raises the question as to why they do not include such details. The main answer that arose from the focus groups was that including such detail is often mistakenly viewed as subjective, when transcribers are aiming to be as objective as possible. However, what is currently not recognised is that omission is a subjective choice in itself, affecting the meaning conveyed. The transcribers may have the right intentions, but current practice is arguably achieving the opposite outcome to that desired.

• However, our experimental work indicates that determining the most appropriate way to represent such features in a transcript is not as straightforward as first envisaged, and further work is therefore required before firm recommendations can be made as to best practice and standardisation of interview transcription.

• The process of summarising, rather than writing everything said “verbatim,” has a substantial impact on the official record. Transcribers are not provided with specific guidance or training about how to summarise information, or about what to include. Instead, they are left to attempt to identify the most evidentially relevant details themselves, without any legal training or experience. There was extensive use of summaries across the transcripts analysed, and we found a wide variety of practice, with once again an overall lack of consistency. In addition, the requirement for the use of a reporting verb when producing such summaries (“Smith said/claimed/insisted…”) introduces a further avoidable element of subjectivity and transcriber interpretation. Further, the fact that the questioning sequence is often not preserved in the transcript is a source of frustration for interviewers, who may well have had specific tactical and evidential reasons for including certain aspects, whose significance is (understandably) not recognised by the transcriber and which are therefore omitted from the record.

• Interviewers reported viewing transcripts as an inadequate reflection of the actual interview interaction, and therefore tend not to use them as an investigative tool. Instead, they may rely on their notes and memory of the interview. This is a risky practice and of some concern.

• Overall, the strong message from the focus groups with both transcript producers and users is that official transcripts currently do not capture interviews effectively. Practitioners are aware of some inaccuracies in what was said, but mainly recognise a failure to capture how it was said. There was strong support for the introduction of standardisation and training.
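
To make the standardisation point concrete, here is a minimal sketch of how markers with no fixed meaning could be flagged for human review. The marker patterns are taken from the examples above, but the checking logic and all names are our own hypothetical constructs, not a tool used by or recommended to the force.

import re

# Hypothetical checker: flag transcript lines containing markers whose
# meaning varies under current practice (runs of dots were observed to
# stand for silence, cut-off talk, overlap, or a change of mode).
AMBIGUOUS_MARKER = re.compile(r"(\.{3,}|…+)")  # runs of dots or ellipses

def flag_for_review(line: str) -> bool:
    """Return True if the line contains a marker with no fixed meaning."""
    return bool(AMBIGUOUS_MARKER.search(line))

lines = [
    "IE: he left at about nine .....",      # ambiguous run of dots
    "IE: [inaudible] and then went home",   # explicit, consistent marker
]
for line in lines:
    print(flag_for_review(line), "-", line)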

Overall, we conclude that the current process for producing interview records in this force does result in problems with evidential consistency. In other words, this type of evidence undergoes alteration as it is processed; something which would likely not be considered acceptable for physical evidence, for example. However, do these types of changes actually matter in practice? Our experimental work sought to address this.

• Our experimental findings demonstrate that the format in which police interview evidence is presented can significantly affect how it is interpreted, supporting our basic point that converting interview evidence into written format can alter how that evidence is perceived. This underlines the importance of the factors identified above, and the potentially serious implications, particularly for the use of interview transcripts as evidence in court.

• Our initial experiment (Deamer et al., 2022) found a range of significant differences between judgements of the interviewee depending on whether participants were presented with an audio recording or transcript of the interview. Those who read the transcript perceived the interviewee as more anxious, less relaxed, more agitated, more nervous, more defensive, less calm, less cooperative and, perhaps most importantly, less likely to be telling the truth [χ2(1) = 4.022, p = 0.045]. Participants identified a range of language and speech features which influenced these perceptions of the interviewee.

• Our expanded second experiment (Tompkinson et al., 2023) replicated these findings, again showing significant differences across judgements of the interviewee between the Audio and Transcript conditions. In this study, the interviewee was judged as being significantly less credible, plausible, sincere, cooperative, calm, friendly and relaxed by participants who read the transcript, as well as significantly more agitated, nervous, surprised and panicked. The interviewee was also significantly more likely to be judged as not telling the truth if the person making the judgement read a transcript as opposed to listening to the audio recording [χ2(2) = 23.82, p < 0.001], with a similar number of participants using the “don't know” option in both conditions. Overall, these findings show the clear potential for instability in perception between audio recordings and transcripts of the same interview data.
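
For readers unfamiliar with the statistics reported above, the following minimal sketch shows how a chi-squared test of this kind can be computed. The cell counts are invented placeholders for illustration only; the actual response data are reported in Deamer et al. (2022) and Tompkinson et al. (2023).

from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: rows are presentation conditions,
# columns are truth judgements. These counts are invented, not our data.
observed = [
    [20, 10],  # audio condition: "telling the truth" vs. "not"
    [12, 18],  # transcript condition
]

# correction=False gives the plain Pearson chi-squared statistic,
# comparable in form to the chi2(df) values reported above.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")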

In order to address the issues identified through our research, we have created a set of criteria which encapsulate our findings, using terms which are readily understandable and applicable by a non-linguistic, non-technical user group: consistency, accuracy and neutrality (CAN). We propose these three areas as the foundational features that should underpin any police interview transcript. Our key recommendation is the introduction of training and guidance to embed the CAN model into police transcription practices; however, further research is required to assess its applicability beyond our pilot force.

5. Discussion

Overall, this project has demonstrated that transcription practices certainly do matter in this context. The way in which police interview evidence is presented can have a substantial effect on how it is perceived and interpreted, to the point of altering whether receivers believe an interviewee is telling the truth or not. Such differences should not occur in the presentation of criminal evidence. Likewise, accuracy and consistency should be expected as minimum requirements for official interview transcripts, so that they can be correctly evaluated by readers, especially those tasked with using interview records as part of the evidence on which to base vital decisions about the interviewee's future (e.g., CPS, judge, jury). Yet we have also shown that transcripts are currently less accurate and consistent than we might wish, especially when it comes to the practice of summarising parts of the interview. Leaving such an important evidential task to clerical staff with no legal training, as appears to be standard across the sector, seems especially troubling and risky.

Some aspects of this can readily be addressed, and a series of recommendations was produced for our partner force. These are a combination of known good practice, points which emerged from our research, and solutions suggested by police practitioners themselves. This comes with the recognition that many factors extend well beyond the remit of individual police forces and will require national uptake and implementation, which in turn requires extending the scope of the FTR project beyond one force. Some aspects may even require changes to criminal procedure, which we acknowledge is a steep hill to climb. However, we continue to work towards these objectives, through engaging more police forces, national organisations and policy initiatives, and through conducting further research.

Our experimental findings indicate that solutions around introducing transcription standardisation are not as straightforward as we had initially hoped, so our original intention of producing a set of implementable standards cannot yet be realised. However, these findings demonstrate the importance of not making simplistic recommendations based on assumptions, but of instead conducting targeted research in order to provide a sound evidence base for best practice. It should be emphasised that the research presented here is a pilot project, and we hope that it has successfully demonstrated that this is an issue worthy of continuing, fuller study.

Data availability statement

The datasets presented in this article are not readily available because they include highly sensitive and confidential material including police interviews with suspects, and discussions of such material with police practitioners. As such, even in anonymised form it is considered too sensitive for public access. Requests to access the datasets should be directed to k.haworth@aston.ac.uk.

Ethics statement

The studies involving human participants and police data were reviewed and approved by the Aston Institute for Forensic Linguistics Research Ethics Committee, Aston University. Written informed consent for participation in the research was provided by the participants of the experimental study, focus groups and questionnaire. The police interviews were provided for analysis by the force under a Data Processing Agreement and were fully redacted, anonymised, and pseudonymised.

Author contributions

KH devised and led the project overall, conducted the questionnaire and focus groups, and produced the synthesised analysis of the combined project findings. KH is the main author of this report. JT took over the experimental strand from FD in January 2022, conducted the second and subsequent experiments, and contributed substantial additional analysis across the project. ER led the interview data analysis strand and contributed additional analysis across the project. FD devised the experimental strand and conducted the first experiment. MH contributed to the design of the interview data analysis strand and made a substantial contribution to the analysis of the interview data. In drafting this text, a full 60-page report was initially written by KH, which includes analysis and writing from all team members. From this document JT wrote a 2-page overview summary, which KH then expanded into this short report. All authors contributed to manuscript revision, read, and approved the submitted version.

Funding

The research was funded as part of an award to the Aston Institute for Forensic Linguistics by Research England's Expanding Excellence in England (E3) fund.

Acknowledgments

We gratefully acknowledge the support and cooperation of the police force with which we collaborated for this research project, especially all participants and data providers. Without their dedication, commitment, and enthusiasm, this project would not have happened. We are respecting their wish to remain anonymous.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^For our pilot force, this now only applies to cases which will be heard in the Crown Court, but there appears to be variation in practice across forces.

2. ^R v Rampling [1987] Crim LR 823.

3. ^s.133 & 134(1) Criminal Justice Act 2003.

4. ^As revealed through FOI enquiries made by us, and the lack of any national guidance.

5. ^Available online at: https://www.prolific.com/.

6. ^The use of witness interviews in the legal process is very different to suspect interviews, especially in terms of their presentation as evidence in court. However, we included these in this strand of the project as part of our analysis of current transcription practices, since they are produced by the same transcribers in the same conditions.

References

Biber, D. (1988). Variation Across Speech and Writing. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511621024

Bucholtz, M. (2007). Variation in transcription. Disc. Stud. 9, 784–808. doi: 10.1177/1461445607082580

Bucholtz, M. (2009). Captured on tape: Professional hearing and competing entextualizations in the criminal justice system. Text Talk 29, 503–523. doi: 10.1515/TEXT.2009.027

Coulthard, M. (2002). “Whose voice is it? Invented and concealed dialogue in written records of verbal evidence produced by the police,” in Language in the Legal Process, ed. J. Cotterill (Basingstoke: Palgrave Macmillan), 19–34. doi: 10.1057/9780230522770_2

Deamer, F., Richardson, E., Basu, N., and Haworth, K. (2022). For the Record: Exploring variability in interpretations of police investigative interviews. Lang. Law/Linguagem Direito. 9, 23. doi: 10.21747/21833745/lanlaw/9_1a2

Eades, D. (1996). “Verbatim courtroom transcripts and discourse analysis,” in Recent Developments in Forensic Linguistics, eds. H. Kniffka, S. Blackwell, and M. Coulthard (Frankfurt am Main: Peter Lang GmbH), 241–254.

Edwards, J. A., and Lampert, M. D. (1993). Talking Data: Transcription and Coding in Discourse Research. Hillsdale, NJ: Lawrence Erlbaum.

Filipović, L. (2022). The tale of two countries: Police interpreting in the UK vs. in the US. Interpreting 24, 254–278. doi: 10.1075/intp.00080.fil

Fraser, H. (2014). Transcription of indistinct forensic recordings: problems and solutions from the perspective of phonetic science. Lang. Law/Linguagem Direito. 1, 5–24. Available online at: https://ojs.letras.up.pt/index.php/LLLD/article/view/2429

Fraser, H. (2018). “Assisting” listeners to hear words that aren't there: dangers in using police transcripts of indistinct covert recordings. Austral. J. Forensic Sci. 50, 129–139. doi: 10.1080/00450618.2017.1340522

Fraser, H. (2022). A framework for deciding how to create and evaluate transcripts for forensic and other purposes. Front. Commun. 7, 898410. doi: 10.3389/fcomm.2022.898410

Halliday, M. A. K. (1989). Spoken and Written Language. Oxford: Oxford University Press.

Haworth, K. (2018). Tapes, transcripts and trials: The routine contamination of police interview evidence. Int. J. Evid. Proof 22, 428–450. doi: 10.1177/1365712718798656

Hepburn, A., and Bolden, G. B. (2012). “The conversation analytic approach to transcription,” in The handbook of Conversation Analysis, eds. J. Sidnell, and T. Stivers (Oxford: Wiley-Blackwell), 57–76. doi: 10.1002/9781118325001.ch4

Heydon, G. (2011). Silence: Civil right or social privilege? A discourse analytic response to a legal problem. J. Pragm. 43, 2308–2316. doi: 10.1016/j.pragma.2011.01.003

Jefferson, G. (2004). “Glossary of transcript symbols with an introduction,” in Conversation Analysis: Studies from the First Generation, ed. G. Lerner (Amsterdam: John Benjamins). doi: 10.1075/pbns.125.02jef

Leech, G., Myers, G., and Thomas, J. (1995). Spoken English on Computer: Transcription, Mark-up and Application. Harlow: Longman.

Nakane, I. (2007). Silence in Intercultural Communication: Perceptions and Performance. Amsterdam: John Benjamins Publishing. doi: 10.1075/pbns.166

Nakane, I. (2011). The role of silence in interpreted police interviews. J. Pragm. 43, 2317–2330. doi: 10.1016/j.pragma.2010.11.013

Ochs, E. (1979). “Transcription as theory,” in Developmental Pragmatics, eds. E. Ochs, and B. B. Schiefflen (New York: Academic Press), 43–72.

Richardson, E., Hamann, M., Tompkinson, J., Haworth, K., and Deamer, F. (2023). Understanding the role of transcription in evidential consistency of police interview records in England and Wales. Lang. Soc. 7, 1–32. doi: 10.1017/S004740452300060X

Richardson, E., Haworth, K., and Deamer, F. (2022). For the Record: questioning transcription processes in legal contexts. Appl. Ling. 43, 677–697. doi: 10.1093/applin/amac005

Shuy, R. (1998). The Language of Confession, Interrogation and Deception. Thousand Oaks, CA: Sage. doi: 10.4135/9781452229133

Shuy, R. W. (1993). Language Crimes: The Use and Abuse of Language Evidence in the Courtroom. Oxford: Blackwell.

Tiersma, P. M. (1999). Legal Language. Chicago: University of Chicago Press.

Tompkinson, J., Haworth, K., Deamer, F., and Richardson, E. (2023). Perceptual instability in police interview records. Int. J. Speech Lang. Law 30, 22–51. doi: 10.1558/ijsll.24565

Walker, A. G. (1986). Context, transcripts and appellate readers. Just. Quart. 3, 409–427. doi: 10.1080/07418828600089041

Walker, A. G. (1990). “Language at work in the law: The customs, conventions, and appellate consequences of court reporting,” in Language in the Judicial Process, eds. J. N. Levi, and A. G. Walker (New York: Plenum Press), 203–244. doi: 10.1007/978-1-4899-3719-3_7

Keywords: transcript, interview record, police interview, investigative interview, language as evidence, forensic linguistics, applied linguistics

Citation: Haworth K, Tompkinson J, Richardson E, Deamer F and Hamann M (2023) “For the Record”: applying linguistics to improve evidential consistency in police investigative interview records. Front. Commun. 8:1178516. doi: 10.3389/fcomm.2023.1178516

Received: 02 March 2023; Accepted: 09 October 2023;
Published: 27 October 2023.

Edited by:

Mila Vulchanova, Norwegian University of Science and Technology, Norway

Reviewed by:

Dian Dia-an Muniroh, Universitas Pendidikan Indonesia, Indonesia
Luna Filipovic, University of California Davis, United States

Copyright © 2023 Haworth, Tompkinson, Richardson, Deamer and Hamann. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kate Haworth, k.haworth@aston.ac.uk
