When is Deceptive Message Production More Effortful than Truth-Telling? A Baker’s Dozen of Moderators

Deception is thought to be more effortful than telling the truth. Empirical evidence from many quarters supports this general proposition. However, there are many factors that qualify and even reverse this pattern. Guided by a communication perspective, I present a baker's dozen of moderators that may alter the degree of cognitive difficulty associated with producing deceptive messages. Among sender-related factors are memory processes, motivation, incentives, and consequences. Lying increases activation of a network of brain regions related to executive memory, suppression of unwanted behaviors, and task switching that is not observed with truth-telling. High motivation coupled with strong incentives or the risk of adverse consequences also prompts more cognitive exertion, for truth-tellers and deceivers alike, to appear credible, with associated effects on performance and message production effort, depending on the magnitude of effort, communicator skill, and experience. Factors related to message and communication context include discourse genre, type of prevarication, expected response length, communication medium, preparation, and recency of target event/issue. These factors can attenuate the degree of cognitive taxation on senders so that truth-telling and deceiving are similarly effortful. Factors related to the interpersonal relationship among interlocutors include whether sender and receiver are cooperative or adversarial and how well-acquainted they are with one another. A final consideration is whether the unit of analysis is the utterance, turn at talk, episode, entire interaction, or series of interactions. Taking these factors into account should produce a more nuanced answer to the question of when deception is more difficult than truth-telling.

deceit. Although many such elements have been included as moderators in deception meta-analyses, their impact has not necessarily been attributed to cognitive (or emotional) exertion, and reliable empirical associations are few. A more coherent framework is therefore wanting.

THE DOMINANT PATTERN
First let us consider the received wisdom that deception is more difficult than truth and some of the evidence that undergirds it. Numerous deception scholars have argued that deception is more effortful than truth-telling (e.g., Zuckerman et al., 1981; Miller and Stiff, 1993; Buller and Burgoon, 1996b; Vrij, 2000; Sporer and Schwandt, 2006). Empirical research has affirmed this view with evidence of measurable psycho-physiological indicators of arousal and stress (e.g., the wealth of research on the polygraph; see Gougler et al., 2011) as well as observable behavioral signs of performance decrements. Deceptive messages are often shorter, slower, and less fluent, with longer response latencies, averted gaze, temporary cessation of gestures, and postural rigidity, all potential indicators of deceivers having to think hard (Goldman-Eisler, 1958; Vrij et al., 1996, 2006; Rockwell et al., 1997; Porter and ten Brinke, 2010; ten Brinke and Porter, 2012; Mullin et al., 2014).
That said, it is important to note that the mental machinations associated with deception need not be burdensome or uniformly so. As Buller and Burgoon (1996b) stated in a rejoinder to DePaulo et al. (1996): . . . DePaulo et al. (1996) ascribe to us a highly cognitive view of deception, with deceptive episodes peopled by highly conscious, surveillant liars and equally vigilant, cunning receivers. This is an exaggerated characterization of our assumptions. We have taken some pains in IDT to argue that much sender and receiver activity during deceptive encounters, like other communicative encounters, can be goal driven and strategic yet largely automatic and "mindless" (see, e.g., Kellermann, 1992; Burgoon and Langer, 1995). We see deception running the gamut from the kinds of inconsequential white lies and evasions that populate daily discourse to the life-threatening kinds of fabrications and omissions that color international conflicts (Burgoon and Buller, 1996, pp. 320-321).
The activities involved in message production are familiar, routinized, overlearned. Mental processes can be activated without the sender necessarily having significant attentional resources diverted. This is especially likely in the dominant laboratory research paradigms, which entail telling harmless and inconsequential lies seldom lasting more than 1 min and addressing single incidents, factual matters, or likes-dislikes. In such cases, messages can be constructed on the fly and modified in response to emergent exigencies. Senders can tap into a host of memories and readily accessible schemas that enable rattling off a deceptive response. The division of labor between verbal and non-verbal components of messages further distributes the workload and reduces the call on cognitive resources. Moreover, if lies are about inconsequential matters, are at the behest of an investigator, and entail no adverse consequences, then any emotional overlay should also be attenuated.
That many forms of deception are "ready-made" does not negate that the other processes surrounding their use, form, and potential consequences still impose more cognitive work on the sender than does a truthful message related to the same narrative. But the depiction of deceptive message production requires more sophisticated modeling. It is not a question of deception being either easier or more difficult than telling the truth. It can be both.

A BAKER'S DOZEN OF MODERATORS
Here, then, toward a more nuanced, communication-oriented view, are a baker's dozen of factors that should tip the scales in one direction or another. This non-exhaustive collection includes sender factors (i.e., ones that reside within the individual producing a message), message and communication context factors (i.e., ones related to the content and style of the message and to the communication context), relationship factors (i.e., ones inhering in the interpersonal relationship between sender and receiver), and the scale of the measurement window under analysis. Together, these factors should enable predictions of the circumstances under which deception will be more effortful. I illustrate many with evidence from our research program on interpersonal and mediated deception.

Sender Memory Demands
Recent neuroscience research is corroborating what social scientists have long suspected: the more a lie activates different mental processes, the more mental taxation it imposes on a communicator. In their updated conceptualization of cognitive resource demands associated with (complex) lie production, Sporer and Schwandt (2007) incorporated newer models of working memory such that cognitive load extends beyond accessing details from memory and constructing noncontradictory messages to also activating autobiographical and executive memory functions.
Consider that compared to the truth-teller, who needs only to recall an actual state of affairs, the deceiver must not only access the true state of affairs but must engage executive memory to decide whether to deceive, evaluate which forms of deception are more "acceptable" according to one's moral code and choose among those options, conduct a cost-benefit calculus of the relative likelihood of success of alternative forms of deceit, fabricate the response itself, compare it to the truth for possible inconsistencies with known facts, check the deceit against a "plausibility" meter, gauge the likelihood of suspicion or detection by the interlocutor, and then actually assemble the verbal and non-verbal components into a normal-appearing message that maximizes credibility, all the while suppressing inapt behaviors and cognitions.
Early explorations of brain functioning with fMRI confirmed that these activities have associated changes in brain activation such that different regions show increased activation during lies than during truths (see, e.g., Spence et al., 2001; Ganis et al., 2003; Abe and Greene, 2014). In one such test, Spence et al. (2008) found that the ventrolateral prefrontal cortex (VLPFC) was preferentially activated to inhibit inappropriate and unwanted cognitions and responses when lying about embarrassing material. Using a different method, Mameli et al. (2010) found multiple networks in the prefrontal cortex involved in deceptive responding as well as longer reaction times when communicators responded deceptively relative to truthful responses at baseline. Ito et al. (2011, p. 126) similarly substantiated increased activity in a network of brain regions in the dorsolateral prefrontal cortex (plus longer response latencies) when remembering and reporting truthful and deceptive neutral and emotional events. The authors did not find a similar response during truth-telling, leading them to suggest that "there is an increase in the amount of conflict and higher cognitive control needed when falsifying the responses compared to responding truthfully." A recent meta-analysis (Christ et al., 2009) further established that lying is associated with multiple executive control processes, specifically working memory, inhibitory control, and task switching (i.e., interspersing truthful with deceptive details). Using their activation likelihood estimate method, the authors demonstrated quantitatively that eight of 13 regions and 173 deception-related foci are consistently more active for deceptive responses than for truthful ones.
These robust findings using varied approaches are strong evidence that deception summons memory processes that are more taxing than those associated with truth-telling. Thus, for the predominant research paradigms that have been used, and holding all other conditions constant, deception requires engagement of more cognitive (and/or emotional) resources than does truth-telling.

Sender Motivation, Incentives, and Consequences
This general pattern notwithstanding, three interrelated moderators that can alter this conclusion are motivation, incentives, and consequences. Because motivation has often been manipulated through high monetary incentives or escaping adverse consequences, these three factors are operationally confounded. High motivation is thought to muster more effort, which can interfere with performance or improve it. The motivation impairment effect (MIE) asserts that motivation impairs non-verbal performance, thereby making lies more transparent, but also facilitates deceivers' verbal performance (DePaulo and Kirkendol, 1989; Bond and DePaulo, 2006). Empirical findings have been fraught with inconsistencies. Burgoon and Floyd (2000), Burgoon et al. (2012), and others have found both impairment and improvement of non-verbal and verbal performance among motivated deceivers engaged in consequential deception. Additionally, high-motivation truth-tellers (not deceivers) sometimes were most affected. Two meta-analyses (that omitted the aforementioned investigations) found high motivation affected liars and truth-tellers equally (Bond and DePaulo, 2006), and high-motivation lies were neither more nor less detectable than other lies (Hartwig and Bond, 2014).
If communicators have little to gain from deceiving or to lose from being caught, lying may pose little more challenge than truth-telling. Aside from the memory demands discussed above, small everyday lies such as fibs and white lies are easy to produce, can draw upon a cache of previously used utterances, and carry no danger if detected. Lies that are likely to summon more cognitive resources are those that yield high payoff if successful or that place the deceiver in serious jeopardy if uncovered (Porter and ten Brinke, 2010). In an analysis of real high-stakes deception, ten Brinke and Porter (2012) found that deceivers feigning distress over their missing children had difficulty faking sadness, leaked expressions of happiness, and were verbally more reticent and tentative. The authors ascribed these performance decrements partly to increased cognitive load. In high-consequence circumstances, however, truthful individuals may be equally distressed or motivated to succeed, so the difficulty of producing believable messages may be similar regardless of veracity.
The diverse results suggest that motivation is more complicated than presupposed and requires more "unpacking" of its relationship to cognitive effort. From a communication standpoint, motivation should follow social facilitation predictions, aiding overlearned behavior and interfering with less practiced behavior, up to a point beyond which emotional flooding should impair both verbal and non-verbal performance. Communicator skill and experience should dictate the threshold for performance deterioration.

Discourse Genre
Language can be categorized according to genres, which are discourse forms that share similarities in their structure, style, content, intended audience, and context in which they occur. Different genres impose qualitatively different demands on deceivers and truth-tellers. A factual narrative or description, for example, comprises representational and verifiable features that need to be assembled into a cogent, plausible sequence, and supported by relevant details. Whereas truth-tellers are only limited by the acuity of their memory when relaying specifics of an event, deceivers not only must recall the true state of affairs, but must decide how much, if any, to tell. They must compare their alternative version to reality, edit the content and linguistic form, and assemble the elements into a believable chronology.
Comparatively, an opinion lacks verifiability and need not be accompanied by any supportive documentation. Deceivers can easily proffer indisputable conjectures and opinions when asked questions such as, "Who do you think may have stolen the money from the cash drawer?" or "What should happen to the thief?", whereas the thoughtful reflections of a truth-teller may require more effort.
Within interactive discourse genres are also variations in form. A face-to-face dialog carries different demands than a monolog or one-to-many speech. When engaged in conversation with another, interlocutors must fulfill multiple communication functions beyond message production itself. First, they must "read" the definition of the situation from contextual cues so as to know what kind of discourse and associated expectations are in force. Because ascertaining identities is usually a high priority, communicators must signal their self-identity (e.g., gender, ethnicity, race, personality), put forth a desired self-presentation, and size up others' identities. As interactions unfold, they must formulate their own messages and decipher the messages and feedback from their interlocutor. They must also regulate their emotional expressions, exchange relational messages that define the relationship between sender and receiver (e.g., trusting, intimate, equal), perform turn-taking responsibilities, and monitor their own communication.
Although human communicators perform these functions in a seemingly effortless fashion, the discourse form can magnify or alleviate some of the effort associated with them. For example, Burgoon et al. (2001) demonstrated that engaging in dialog compared to face-to-face monolog was more difficult initially, but over time, dialog eased the demands on deceivers, who were able to share the turn-taking burden with their interlocutor, create a smooth interaction pattern by developing interactional synchrony, adapt to interlocutor feedback, and approximate normal communication patterns.[2]
Another genre, the interview, can also influence the cognitive burden on respondents. The question-answer structure adds predictability to who is supposed to talk when and what the content should be. Language can be borrowed from the interviewer's questions, and questions can be repeated as a stalling technique. Even within interviews are notable differences: Relative to an open-ended, free-wheeling interview, a structured one that requires short-answer replies reduces the degrees of freedom of what can be said and allows deceivers to forecast what is coming next. Many deception experiments are of this latter brief-answer variety, which our research has shown produces substantially different behavioral and psychophysiological responses than open-ended interview protocols (Burgoon et al., 2010).
The illustrative genres mentioned here point to the need to formulate deception-relevant taxonomies of genres so that predictions can be made as to which will intensify or diminish the cognitive effort required of sender and receiver.

Form of Prevarication
Contrary to the claims of McCornack et al. (2014) that virtually all extant deception research bifurcates deception into bald-faced lies or bald-faced truths, and regards only those discourse options as worthy of scholarly investigation, most deception scholars recognize that deception includes a variety of forms. A sampling of research across the last five decades and across multiple disciplines has identified such forms of prevarication as white lies, altruistic lies, omissions, concealment, equivocation, evasions, exaggerations, strategic ambiguity, and impostership (see, e.g., Turner et al., 1975; Hopper and Bell, 1984; Miller and Stiff, 1993; Buller et al., 1994; Searcy and Nowicki, 2005; Ennis et al., 2008; Knapp, 2008). The type of prevarication being told will affect the cognitive resources required in its telling.

[Footnote 2: Although some meta-analyses have attempted to analyze the effects of communication context or genre on receiver detection accuracy (e.g., Bond and DePaulo, 2006; Hartwig and Bond, 2014), virtually no research has explicitly tested their effects on sender performance. Hartwig and Bond (2014), for example, had too few samples of different interview types to separate out different categories. Part of the challenge in deriving stable meta-analytic estimates is that only a small fraction of investigations have entailed interactions exceeding 1 min in length. Moreover, genre constructs such as interactivity are multidimensional. To test properly the effects of interaction on senders requires parsing the different attributes (e.g., participation, synchronicity, propinquity, multiplicity of modalities) and testing each independently to isolate the relevant features.]
In his original formulation of information manipulation theory (IMT), McCornack (1997) proposed that deceptive discourse violates conversational implicatures along one or more of Grice's (1989) four dimensions of cooperative discourse: quantity, quality, manner, and relation. A similar set of five dimensions of information management has been proposed: completeness (comparable to quantity), veridicality (comparable to quality), clarity (comparable to manner), relevance (comparable to relation), and personalism (see also Buller and Burgoon, 1996a). Under both conceptualizations, some forms of deceit such as omissions are more easily produced than others.
Other times, truth-telling can be more difficult than deceit. Having to convey a "hard" truth to a patient dying of a terminal disease can levy more cognitive taxation than manufacturing a comparable falsehood that there is hope for recovery from the disease. A provocative line of research on whether people lie automatically or must decide to lie has also shown that when cheating offers a high probability of personal gain, people may be quicker to produce self-serving lies than truthful responses. In tempting situations, if a self-benefiting lie is easy to craft and little time is allowed for reflection, lying may be the more automatic response, whereas honesty may necessitate more hesitation, deliberation, and executive control (Shalvi et al., 2012; Tabatabaeian et al., 2015; see also Bereby-Meyer and Shalvi, 2015, for a review of supporting literature). When social bonds are made salient, people also produce lies that benefit their social group more quickly than lies that benefit only the self (Shalvi and De Dreu, 2014).
In short, the type of prevarication (or truth) can be located on a continuum from easy to difficult, with cognitive effort for easy lies making them no more challenging than telling the truth.

Expected Response Length
Different kinds of interactions have associated expectations about utterance length. Day-to-day conversations are typified by reciprocation of short turns at talk. Conversing deceivers may project that they can get away with very brief responses while still satisfying conversational expectations. A spouse's query, "How was your day?" is not expected to produce a dissertation on all one's trials and tribulations at work or home. A husband who skipped work to go gambling or a wife on an illicit tryst can safely reply with a breezy "fine." Such brief lies and truths, the bread and butter of much deception research, may differ little in their demands on resources. More penetrating questions like, "Why couldn't I reach you today when I called your cell four times?" require lengthier, and more demanding, accounts.
Standard interview protocols also have associated expectations about what response lengths suffice. Introspective questions require conjectural rather than factual responses, and their non-verifiability may attenuate the memory burden on deceivers. The behavioral analysis interview operates on the premise that innocent people will exhibit the Sherlock Holmes effect: In attempting to aid an investigation, innocent respondents may speculate more than deceivers and widen the pool of suspects. Comparatively, deceivers should minimize conjecture and avoid proposing other suspects for fear of narrowing the pool to themselves (Horvath et al., 2008). A cognitive interview, in which respondents are asked to retell an account from multiple vantage points (Fisher and Geiselman, 1992), requests increasing elaboration and details, something that is expected to be easier for truth-tellers than deceivers to accomplish over repeated retellings (see also Vrij and Granhag, 2012).
Generally, conversations have associated norms and expectations for what kinds of utterances will satisfy the Gricean maxims, and communicators are fairly adept at predicting and fulfilling those expectations. The degree of cognitive difficulty should correlate positively with response length and how much the deceptive response deviates from expected form (with exceptions that can be anticipated in advance).

Sanctioning of Deceit
Most laboratory research involves deceit that is sanctioned by the experimenter rather than being chosen voluntarily by the perpetrator (Frank and Feeley, 2003). The alternative of allowing research participants to choose whether to lie or not creates a confound in that only skillful liars and those with an honest-appearing demeanor may choose to lie (Levine et al., 2010). Apart from experimenter-instigated deceit differing behaviorally from that chosen of a deceiver's own volition (Sporer and Schwandt, 2007; Dunbar et al., 2013), the implication outside the laboratory is that deception will vary substantially in form and difficulty as a function of sanctioning and communicator skill (see also IDT regarding communicator skill).
That said, choice and skill may not completely alleviate the added cognitive work associated with deceit. Spence et al. (2008) designed an fMRI experiment in which deceivers could choose to comply or defy an experimenter's request to divulge embarrassing secrets. Results revealed lying activated the VLPFC even under free choice. At the most fundamental level of brain functioning, then, lying still exercises a main effect on cognitive processing.

Communication Medium
The medium of communication itself also influences the degree of cognitive difficulty associated with lying. IDT's first proposition states, "Context features of deceptive interchanges systematically affect sender and receiver cognitions and behaviors; two of special importance are the interactivity of the communication medium and the demands of the conversational task" (Burgoon and Buller, 2015). To the extent that deceivers are interacting synchronously and with all audiovisual modalities available to receivers (e.g., face-to-face, computer-mediated communication, teleconferencing), there are more communication functions to which cognitive resources must be devoted. When modalities are more limited (such as voice or chat) and exchanges are asynchronous, more resources can be distributed among fewer aspects of message production and with less time pressure. Consistent with this reasoning, participants in a mock theft experienced the least anxiety and cognitive load when interacting via text, were the most aroused and exercised the most behavioral control when interacting face-to-face, and reported the most cognitive effort when interacting via an unfamiliar audio format (Burgoon et al., 2004; Burgoon, 2015). Thus, leaner and non-interactive media should attenuate cognitive effort.

Preparation
This construct subsumes many related variables-advance thought, planning, rehearsal, or editing. Extemporaneous or unscripted discourse is produced in real time; planned, rehearsed, or edited discourse entails some intervening time interval between the deliberation and construction of a message and its ultimate delivery. Such ex ante preparation may be experimentally manipulated, as in a classic interviewing investigation by O'Hair et al. (1981), or it may be prompted by high-stakes circumstances such as queries about fraudulent financial reporting: ". . . individuals may, for example, prepare extensively before speaking to lower the cognitive burden that can accompany deception, or may undergo voice training in an attempt to sound vocally like the antithesis of someone engaging in deception" (Burgoon et al., 2015, p. 2).
Three meta-analyses (Zuckerman and Driver, 1985; DePaulo et al., 2003; Sporer and Schwandt, 2006) included preparation as a moderator and predicted that planning and rehearsal should facilitate deceptive performance by reducing cognitive/memory load. Although the meta-analyses yielded mixed results and weak effect sizes, planned messages were found to have shorter response latencies and fewer silent pauses than unplanned ones. More recent research examining higher stakes deception has shown that fraud-relevant utterances were longer and more laden with details than non-fraudulent ones, a pattern duplicated by Braun et al. (2015) in their analysis of deceptive politicians' messages. To the extent that detection accuracy is lower with planned than unplanned deception (Bond and DePaulo, 2006), some of that inaccuracy may be attributable to planned messages being indistinguishable from truth-telling. With advance preparation, communicators are better able to approximate normal, credible communication patterns.

Recency of Target Incident or Issue
Depending on how distant it is, the time frame for requested narratives and accounts will have expectations associated with it for what is a complete, accurate, and clear response. Whereas recent events should impose equal recall difficulty on truth-tellers and deceivers, long-ago ones should be harder to recall for conscientious truth-tellers trying to be thorough and accurate than for deceivers fabricating a story or borrowing details from similar events. Some interview protocols like the cognitive interview capitalize on this reversal of expectations, in which longer and more effortful answers should be associated with truth. Comparison questions in polygraph testing, which are intended to create more mental conflict for truth-tellers than deceivers, can be made even more challenging when the time frame is open-ended. The question, "Have you ever lied to someone who trusted you?" may prompt truth-tellers to ponder and hesitate more than deceivers. Other aspects of cognitive work unique to deceivers are the activation of executive memory to make the decision to lie, the construction of and selection among possible lies, and the comparison to the truth, which may guide decisions about which form and content of the lie is likely to be the most efficacious.

Cooperative-Adversarial Relationship
Intertwined with the genre of discourse is whether the relationship between communicators constitutes a cooperative or adversarial one. Grice (1989) proposed that communicators enter encounters with a presumption of cooperativeness. In practice, however, many communication contexts and relationships (criminal interrogations, litigation, labor disputes, negotiations, dispute mediations, and divorce proceedings, among others) are recognized as adversarial, placing the parties at odds with one another and suspending the assumption of cooperativeness. In adversarial interactions, one cannot even assume that interlocutors are using language in the same way. For example, in organizational contexts, management may practice strategic ambiguity as a way to reduce rather than facilitate understanding.
In other cases, participants with hidden agendas may wish to give the appearance of cooperativeness while covertly violating the Gricean maxims (McCornack, 1997). Under these circumstances the success of the deception will depend on how clandestine the deceit is. Predictions about how much cognitive difficulty is associated with lying should take into account how much cognitive "work" is needed to keep nefarious motives hidden. Unwitting interlocutors, for example, may lessen the difficulty for deceivers by proposing plausible explanations for a sender's otherwise implausible response, thereby helping deceivers construct a believable narrative as a dialog unfolds.

Relational Familiarity
Buller and Burgoon (1996b) identified three types of familiarity, one of which is relational familiarity. People who are well acquainted with one another have prior knowledge and a history of behavior against which to judge anything that is said. For the deceiver, this can make devising a plausible lie that evades detection more challenging inasmuch as there are numerous touchpoints against which the deceiver must make mental comparisons before actually uttering the lie. At the same time, deceivers can capitalize on their familiarity with the receiver to adapt lies more specifically to the interlocutor's knowledge bank and can watch the receiver for telltale signs of disbelief. Buller and Aune (1987) found deceivers interacting with familiar others successfully restored their original level of animation, while deceivers interacting with strangers became less immediate and animated over time. Thus, deceivers took advantage of their relationship to improve their performance over time. Burgoon et al. (2001) found similar results in that deceivers interacting with friends rather than strangers were better able over time to manage their informational content, speech fluency, non-verbal demeanor, and image.
Presumably the improved performances were accompanied by a corresponding reduction in cognitive difficulty for deceivers relative to truth-tellers. Since receivers seldom expect to be lied to, relational familiarity probably confers more of an advantage on the sender than the receiver.

Communication Unit of Analysis
The sampling unit for deception research and meta-analyses typically has been the single utterance, turn at talk, or answer to a single question. Such samples may be less than 30 s in length. Yet deception may be woven into a series of utterances (e.g., an interview), interpenetrate an entire conversational episode, or span multiple conversations (e.g., multiple interrogations). The span of time from beginning to end of a deception event should affect how difficult it is to produce and maintain. Speculatively, the more numerous and lengthy the utterances related to an issue, the more cognitively challenging lying should be, inasmuch as one must remember what has been said previously, create consistency among utterances, reconcile what is being said with a potentially growing population of known facts, make decisions about which truthful details to divulge, and decide what kinds of deception to enact and whether to change strategies (e.g., from concealment to equivocation), and so forth. Lengthy criminal justice interviews and interrogations depend on extended questioning to create more emotional and mental hardship for interviewees. Comparatively, producing brief utterances not only minimizes the decision making, memory searching, and message production demands that communicators incur (regardless of their veracity) but can also buy deceivers more time to concoct a credible response and to intersperse truthful details within their discourse to bolster believability.
The time course of the communication event thus may dictate its demand on cognitive and emotional resources. As the number of utterances or interchanges increases, demands on cognitive and emotional resources should increase differentially, up to an as-yet undetermined point. Beyond that, cooperative interactions should reduce the burden on deceivers by allowing them to avail themselves of receiver feedback, make conversational repairs, and mesh the dyad's interaction patterns. We have witnessed this in several of our interviewing experiments. In one case, interviewees who were blindsided by unexpected questions initially gave non-fluent and improbable responses but, with the aid of unwitting interviewers, managed to spin out explanations that the interviewers accepted. Conversely, adversarial interactions such as interrogations may intensify the burden on deceivers. In drawing any conclusions, then, about whether lying is more difficult than truth-telling, it is necessary to specify the sampling unit for the respective truths and lies: short utterances or lengthy ones, and single episodes or a series of them. Longer can be more difficult but may also introduce opportunities for countervailing repairs by deceivers.

IMPLICATIONS
What are the implications of this decomposition of moderators of cognitive effort? First, the relationship between deception and cognitive effort is complex and highly variable. In some respects, the issue is one of definition of terms: What constitutes effort? If activation of more brain regions and processes constitutes effort, then deceit can be construed as creating greater actual cognitive work than truth. However, if effort requires some level of awareness, then only under more serious circumstances involving complex lies with significant (favorable or unfavorable) consequences may lying be experienced as more cognitively effortful.
Moreover, a variety of moderators can alter the deception-cognition relationship, and sometimes in contradictory ways. These previously unidentified or untested moderators may account for the oft-times weak association between presumed cognitive effort and observable behavior. Only if the relevant influences can be parsed will it be possible to make sound and reliable cognition-based predictions and will cognition-based effects be replicable.
Also confounding the picture is that many factors like motivation and incentives exert similar influence on truth-tellers, thus making deceptive and truthful behavior patterns indistinguishable.
Too often, researchers have inferred backward from observable cues to likely cognitive causes, but such reasoning is fraught with indeterminacy due to the absence of single one-to-one correspondences between specific indicators and mental work. Even though more memory processes may be engaged, the observable indicators may not betray that work; they may arise from other causes; and they may be associated with both truth and deception.
Given these complicating factors, any cognitive-load, cue-based approach may be difficult to utilize in practice. Only if the various moderators can be taken into account will such approaches be fully efficacious.

CONCLUSION
This research topic on whether lying is more effortful cognitively than truth-telling is meant to challenge long-held assumptions. Challenging assumptions is clearly a worthwhile scientific endeavor, and this collection of essays will doubtless enlighten the issue while raising a number of salient considerations.
In the process of addressing this assumption, however, let us not erect false dichotomies, straw-man arguments, or extreme positions that produce more heat than light. For example, the assertion by McCornack et al. (2014) that the differences between truth and deception should all be attributed to memory and information processing is a serious overstatement, just as their assertion that current models of deception impute too much cognitive work to deceptive message production is an overly broad gloss. As with so many issues surrounding human cognition and behavior, simple answers are facile but inaccurate and will set our science back. The typology of 13 moderators I have proposed derives from modeling deception as a communication phenomenon, the properties of which can exacerbate or alleviate cognitive demands. The non-exhaustive collection of moderators includes: (1) sender memory demands, (2) sender motivation, (3) incentives and consequences, (4) discourse genre, (5) form of prevarication, (6) expected response length, (7) sanctioning of the deceit, (8) communication medium, (9) advance preparation, (10) recency of the incident/issue, (11) relationship among interlocutors (e.g., cooperative or adversarial), (12) relational familiarity, and (13) size of unit of analysis. I invite further formalization and empirical testing by other deception scholars to disentangle the effects of these significant moderators.