Record: 33
Title: Epilogue: Where are we going, and how do we get there?
Subject(s): INTERPERSONAL communication -- Psychological aspects; DECEPTION -- Psychological aspects
Source: Journal of Language & Social Psychology, Dec94, Vol. 13 Issue 4, p514, 5p
Author(s): Riggio, Ronald E.
Abstract: Focuses on the articles presented involving deception research. Theme and contributions of each article; Research on interpersonal deception; How to improve and advance research in deception.
ISSN: 0261-927X

EPILOGUE: WHERE ARE WE GOING, AND HOW DO WE GET THERE?

As can be seen from the variety and scope of the articles in this special issue, research on interpersonal deception is advancing at a good pace and on a number of fronts. Included in this issue are topics as diverse as investigations of deception types, a categorization of practitioner strategies used in detecting real-world instances of deceit, analyses of verbal and nonverbal correlates of deception, and studies of deception in public and private settings. This JLSP issue, coupled with an upcoming special issue of Communication Theory, also on deception, provides ample evidence that deception research is alive and flourishing. In spite of this productivity, however, there is still a long way to go. Interpersonal deception is an extremely complex phenomenon--so complex, in fact, that some deception researchers (myself included) have been known to experience bouts of frustration and even anxiety attacks while attempting to design studies, or interpret data, in an effort to examine just some small aspect of deception. In the scheme of things, we have really only scratched the surface in terms of our understanding of interpersonal deception. So, what are some of the additional "scratchings" that have been made by this collection of papers?
There are a few important themes and some significant contributions represented here. One theme that is consistent through most of these articles is the emphasis on deceptive interactions. Much of the early research on deception focused on the deceiver, without taking into account the fact that deception is an interactive process--a complex interplay between deceiver and detector. This concern with deceptive interaction is at the core of Burgoon and Buller's Interpersonal Deception Theory (IDT; Buller & Burgoon, 1994; Burgoon & Buller, 1994), and IDT has influenced several of the articles presented here. A second important theme represented in this collection of articles is the concern given to distinguishing among different types of deception. The empirical work presented here examines a variety of deception types, including falsification, equivocation, concealment, and misdirection (Buller, Burgoon, White, & Ebesu; Ebesu & Miller). In addition, a variety of real-world instances of deception are explored in some of the other articles, including lies in politics and other areas of the public domain (Robinson), patients deceiving their doctors (M. Burgoon, Callister, & Hunsaker), and the lies told to attorneys, police officers, and other professionals who are trying to uncover the truth (Kalbfleisch). A third important theme in this special issue is the strong empirical emphasis of the authors. Most of these articles contain either detailed categorizations of deception types (or detection strategies, in the case of Kalbfleisch), or massive amounts of data on deceptive behavior analyzed in numerous interesting ways. Many of these reports have an archival quality about them. There is much here for the serious deception researcher to pore over, and a lot of it is work that is novel and ground-breaking. Let me briefly review some of the contributions made by each article.
The article by Buller, Burgoon, White, and Ebesu, "Behavioral Profiles of Falsification, Equivocation, and Concealment," is a good example of the detailed empirical approach mentioned earlier. David Buller, Judee Burgoon, and their associates have created, through a series of studies funded by a U.S. Army Research Institute grant, an enormous data set investigating many aspects of interpersonal deception. This article presents a key piece of this work. The study's design clearly reflects the complex multivariate approach to studying deception that will lead to valuable advancements in our knowledge. Variables examined include different types of deception (three types), the effects of planning on deception (i.e., planned vs. spontaneous lies), the impact on the deceiver of the detector's suspiciousness, whether receivers are experts or novices at detecting deception, and whether the interactants are or are not related. This complex data set is intricately analyzed, and the article is chock-full of interesting findings. For instance, one important angle to the article is the emphasis on deception as impression management--affected by such key variables as whether the deceiver thinks the detector is suspicious, and whether the deception was planned or unplanned. An article by Burgoon, Buller, and Guerrero (to appear in the September 1995 issue of The Journal of Language and Social Psychology) represents another piece of the ambitious B & B deception project. This article focuses on the role of social skills in ability to deceive and detect deception. It is a nice extension of my lab's work on individual differences in deception (Riggio, Tucker, & Throckmorton, 1987; Riggio, Tucker, & Widaman, 1987). The authors replicate our findings that social skills are related to believability and deception success, and our (unpublished) failure to find that social skills (as measured by the Social Skills Inventory) are related to detection success.
There has clearly been an imbalance in the deception literature, with greater attention given to studying the deceiver than the detector. Hopefully, this interesting article will inspire future research on the skills of the detector of deception. Buller, Burgoon, Buslig, and Roiger give us another replication, this time of the Bavelas, Black, Chovil, and Mullett (1990) research on equivocation. This and the previous article emphasize the importance of replication, particularly when dealing with a complex social phenomenon like deception. Importantly, IDT is used to frame this reanalysis/replication. It is a nice blending of theory and empirical work. The article by Ebesu and Miller is an interesting study that extends research on verbal and nonverbal cues of deception by looking at how behavioral displays vary as a function of the type of deception. This article truly has an archival quality to it with the many hypotheses and research questions it addresses. Adding to the archival feel is a rather comprehensive literature review. The authors do a particularly good job of using IDT to help interpret and clarify their findings. The last three articles move us out of the laboratory to study real-world instances of deception and deception detection. The study by M. Burgoon, Callister, and Hunsaker explores deception in patient-physician interactions. Although there has been quite a bit of research on the physician-patient relationship, this is the first detailed empirical look at deception in that context. As a bonus, the authors have created a measurement instrument that could be used for future research in the area. Pamela Kalbfleisch's interesting and innovative work looks at strategies that professional detectors of deception (i.e., law enforcement officers, attorneys) use to draw out deception cues from deceivers.
This work is important for two reasons: First, it is a novel but valuable approach that starts at the far end of the spectrum--examining practitioner strategies--as a means of gaining insight into the near end of the spectrum--exploring the detection process. Second, Kalbfleisch does the groundwork for future researchers by assembling these detection strategies into a useful and meaningful framework. W. Peter Robinson concludes this series of articles by looking at the important, timely, and inherently interesting topic of lies in the public arena. Like Kalbfleisch, Robinson is breaking new ground here, and this work provides a nice base for researchers interested in exploring the lies committed by public officials--those instances of deception that can have widespread impact on multitudes of people.

Research on Interpersonal Deception: Where Are We Going?

As several of the studies in this issue illustrate, research on deception is moving away from a sole focus on the deceiver to analysis of deceptive interactions--including examining both the deceiver and the detector/target, and the interchange between them. Unfortunately, even state-of-the-art deception research still has a very limited perspective of deceptive interactions, because most interactive studies of deception continue to take place in relatively artificial, laboratory-like conditions, with deceivers asked by experimenters to pose deception on cue. Detailed investigations of ongoing, natural deceptive interactions have simply not been done, often because of the logistical problems involved. This is not to say that we should discourage laboratory research on deception. There is still an enormous amount of work to be done in the lab.
As the latter articles in this issue demonstrate, however, there needs to be increased attention given to deception out in the real world--interpersonal deception between loved ones, acquaintances, and strangers, as well as deception that takes place in commerce, politics, and the exchange of services. One recent trend in deception research that is nicely demonstrated in this issue is distinguishing among different types of deception. In recent years, rather than lumping all forms of deception together, deception researchers are becoming more and more specific about the type of deception being studied. Routinely, this may mean not only talking about the type of lie, but also specifying other relevant characteristics, such as whether the lie was planned or spontaneous. Another future trend that is delineated in several of the articles in this issue is an increasing concern with the development of theories of deception, or the application of related interactional theories to help provide a framework for studying interpersonal deception. IDT is an important contribution in this area; however, there is still a shortage of theorizing guiding much of the deception research.

Future Research on Interpersonal Deception: How Do We Get There?

To adequately study deceptive interactions requires that researchers simultaneously examine the behavior of deceiver and detector/target to determine how the behavior of one interactant influences the behavior of the other. Technologically, this approach will likely require coordinated videotape systems, with cameras simultaneously recording both interactants. This is similar to the split-screen technique used by researchers studying interactional synchrony--where frontal shots of both interactants are presented to observers simultaneously on halves of the video monitor. A method such as this will allow researchers to study the displays of each interactant and the immediate reactions to the other's displays.
Although I do not consider myself any sort of technological "seer," there will likely be some opportunities created by the explosion of advancements made in computer-video interfaces. It is likely that laboratory research on deception will benefit from the control afforded by interactive video technology. For example, a subject could conceivably arrive at the lab and carry on an interaction with a videotaped interactant who is preprogrammed to exhibit certain behaviors associated with deception or truthtelling to systematically study the subject's reaction to the cues. In short, we may be able to create interactions that model actual human interchanges but that are manipulated and controlled by the experimenter. To study deception in relationships or in long-term ongoing interactions, it might be important to seek out interactive situations that are conducive to this type of study, such as the patient-physician deceptions studied by Michael Burgoon and his associates. For instance, therapist-client interactions, or the triadic interactions that constitute marital counseling, might provide fertile ground for looking at ongoing relationships where deception occurs and can be observed in detail. Studies could be conducted of the veracity of salespersons' pitches and the gullibility of the buyers. Following Kalbfleisch's lead, perhaps permission could be obtained to videotape and study police interrogations (we can already see the actual arrests broadcast on network television), or actual court cases (also televised). There are likely many such possibilities. Finally, if research on deception is going to continue to advance at an increasing rate, it will require the training of new deception researchers. To that end, it is encouraging to see the many young deception researchers who are coauthors of articles published in this special issue. 
It is equally important that deception research be viewed by the scientific community and the community at large as important, meaningful, and worthy of study (and funding). This should heighten researchers' concerns with studying real-world deception--in government, in the courtroom, in the workplace, as well as at home. Deception is ubiquitous. It is one of the most complex forms of human interaction, and it affects us all. We really should know more about it.

REFERENCES

Bavelas, J. B., Black, A., Chovil, N., & Mullett, J. (1990). Equivocal communication. Newbury Park, CA: Sage.
Buller, D. B., & Burgoon, J. K. (1994). Deception. In J. A. Daly & J. M. Wiemann (Eds.), Communicating strategically: Strategies in interpersonal communication (pp. 191-223). Hillsdale, NJ: Lawrence Erlbaum.
Burgoon, J. K., & Buller, D. B. (1994). Interpersonal deception theory. Communication Theory.
Riggio, R. E., Tucker, J., & Throckmorton, B. (1987). Social skills and deception ability. Personality and Social Psychology Bulletin, 13, 568-577.
Riggio, R. E., Tucker, J., & Widaman, K. F. (1987). Verbal and nonverbal cues as mediators of deception ability. Journal of Nonverbal Behavior, 11, 126-145.

By RONALD E. RIGGIO, California State University, Fullerton

Record: 31
Title: The language of detecting deceit.
Subject(s): DECEPTION -- Psychological aspects; INTERPERSONAL communication -- Psychological aspects
Source: Journal of Language & Social Psychology, Dec94, Vol. 13 Issue 4, p469, 28p, 1 chart
Author(s): Kalbfleisch, Pamela J.
Abstract: Provides information on the study of language strategies that are used to detect deceptive communication in interpersonal interactions. Classification of the typology; Strategies and implementation tactics; Discussions on deception detection techniques; Conclusion.
ISSN: 0261-927X

THE LANGUAGE OF DETECTING DECEIT

Techniques used to convince potential deceivers to tell the truth or to draw out information and cues to deception have long been part of the conventional wisdom of attorneys, police officers, investigative reporters, and other professionals who work with potential deceivers. A comprehensive search of this pragmatic literature yields myriad message strategies that these practitioners believe to be effective in obtaining accurate information and detecting deception. These diverse strategies were placed into a typology categorized by the ecumenical motivations facilitating truth-telling, increased information exchange, and cues to deceptive communication. Sample message tactics for implementing these overarching strategies are incorporated into this classification. This typology provides an organizational scheme for the diverse techniques drawn from pragmatic literature and can serve as a template or heuristic model for examining deception detection in interpersonal settings.

Detecting deceptive communication has long been a concern of scholars and students of human behavior (see Kalbfleisch, 1992, and Zuckerman, DePaulo, & Rosenthal, 1981, for reviews). However, only recently have researchers begun to question the prevailing paradigms used to study deception and to encourage the development of research designed to capture the ongoing nature of deceptive interactions (Buller & Burgoon, 1994; Burgoon, 1989, 1992; Burgoon & Buller, 1994; Knapp, Cody, & Reardon, 1987).
For years, the most popular research paradigm used to study deceptive communication has placed the deception detector in the role of observer. In this model, deceptive and truthful communication is presented as experimental stimuli from which observers are asked to discriminate (e.g., Ekman & Friesen, 1974; Miller, deTurck, & Kalbfleisch, 1983; Vrij, 1994). Other more recent paradigms have allowed the person detecting deceit to (a) ask the potential deceiver questions (Stiff, Kim, & Ramesh, 1992) or (b) use a set of designated probes prior to rendering a judgment of a potential deceiver's veracity (e.g., Buller, Strzyzewski, & Comstock, 1991; Buller, Strzyzewski, & Hunsaker, 1991). In general, these experimental interactions have been brief and typically limited to designated responses on the part of the person attempting to detect deceit. Detection accuracy in the standard research paradigm typically ranges between 45% and 70%, with only a few results above or below this range (Kalbfleisch, 1985, 1990; Zuckerman et al., 1981). Most researchers using the more interactive paradigms have found similar accuracy results. An exception was the Buller, Strzyzewski, and Hunsaker (1991) study in which communicators who probed for deceit were less accurate than those who observed truthful and deceptive communication. The probing communicators in this study reached accuracy levels ranging from only 38% to 42%. Burgoon and Buller (1994) and Buller and Burgoon (1994) argue that these studies are constraining deceptive communication to follow a static format instead of allowing communicators to interact over a period of time. These researchers and their colleagues are analyzing deceptive communication from an interpersonal perspective across several levels of interaction. This is obviously the beginning of an arduous undertaking as they and other social scientists move a previously static area of research into an interactive framework.
One of the many areas that needs to be considered in this endeavor is the analysis of ongoing attempts to detect deception in interpersonal interactions. Stiff et al. (1992) looked to interpersonal research in conflict to locate a method of coding questions asked of potential deceivers. They noted that the categories of encouragement ("OK," "I see," "Yeah"), restatements, and requests for elaboration, evaluation, and interpretation were useful ways to measure the level of cognitive involvement of a person trying to detect deceit. These five question types are, no doubt, a start in examining verbal strategies that can be used to stimulate a communicator to provide more information. Another milieu that may provide guidance in the study of interactive attempts to detect deceit is that of practitioners who are placed in the situation of detecting deception as part of their professional responsibilities. These people would include police investigators, employment interviewers, journalists, attorneys, medical personnel, and counseling psychologists. The professional literature of these practitioners contains extensive advice for spotting deceivers and detecting deception. Although, arguably, these practitioners function in more formalized environments than typical interpersonal settings, they have given a great deal of thought and practice to discovering deceit interactively. For example, the biography of William Burns--the famous detective who founded the Burns Detective Agency--contains an elaborate description of a case in which Burns was successful in eliciting the truth from a formerly deceitful person. Burns had been employed by a group of landowners to get to the bottom of a land fraud case in Arizona. His son, George, witnessed this encounter. Then gradually, almost imperceptibly, the veteran detective began winning admission after admission from the man.
The technique he [Burns] was using consisted of making some outlandish charge, then arguing hotly when Schneider [his subject] denied it. Finally, as if in defeat and resignation, the senior Burns would say, "Well, at least this much is true . . . ." and offer some minor obvious fact that Schneider had no reason to deny. Relieved at having the major accusation dropped, Schneider generally would agree. This became an established pattern. As George Burns listened in fascination, he heard the points to which Schneider was agreeing become steadily more significant. Before long, the man was admitting things he had angrily denied an hour or two earlier. (Caesar, 1968, p. 109) Similar techniques are described by Aubry and Caputo (1980) and Brady (1976). It seems reasonable to term this tactic exaggeration, in that the principal thrust recommended is to exaggerate an accusation, then depend on the response to this exaggeration to be an admission to a lesser accusation. A more straightforward strategy is described by a psychiatrist in a narrative about assessing the mental competence of a person who seemed to be faking mental illness: When I overcame my anger long enough to tackle this defendant's resistance and talked about the silly answers, I told him that I did not want to tell the judge he was faking and that he should think about it. Then, surprisingly, he talked, looking me straight in the eye. "Of course" he knew the charges. The long rap sheet was a testament to his inability to stay out of trouble and to his lack of personal resources.... He talked for quite a while and then said "Gee, how did you get me to say all that? I wasn't going to talk." (Goldstein, 1989, p. 225) This direct approach is recommended by other practitioners as well (e.g., Buckwalter, 1983; Royal & Schutt, 1976) as sometimes being the best way to get a person to tell the truth.
The professional literature of these practitioners frequently contains advice on detecting deceit ranging from accounts and narratives to lists of verbal tactics. Sometimes the authors give one or two tactics for detecting deception; other times, they will discuss several techniques. This literature is predominantly based on the personal experiences of the practitioner authors. In general, the strategies presented in the pragmatic literature are directed toward the interactive detection of deceit. Some tactics require the responses of a suspected deceiver to be enacted (such as the exaggeration tactic). Other tactics are described for use singly or in combination (such as asking for elaboration or the use of silence). Additionally, some strategies are suggested as serving as antecedents to others, whereas further strategies may be described as last-chance efforts for detecting deceit (such as the bluff). Ironically, there are some sources that discuss how best to deflect or counter several of these same strategies when they are used by others (cf. Brodsky's, 1991, guidelines for expert witnesses). The primary contribution of these practitioners to the study of interpersonal deception is most likely to be the provision of a framework for examining the language strategies used to detect deceit. It would appear that many of these suggested tactics can be used with varying levels of politeness or severity. For example, Blau (as quoted in Krieshok, 1987) warns expert witnesses to be cautious of a lawyer who ". . . in a very soft, reasonable, sensible way, will try to get you to say the same thing in two different ways and thus impeach your testimony" (p. 71). This same repetition strategy could also be used in a much more hostile and aggressive manner (cf. Aubry & Caputo, 1980).
In developing a classification for the diverse message strategies suggested by practitioners for detecting deceit, a search was made through sources listed in indexing services, library holdings, and books in print. This search yielded 80 books and articles with specific message strategy suggestions. Each author's suggested tactics were listed separately, then examined for similarities to those proposed by other authors. Suggested techniques that were removed from consideration for this typology were (a) tactics that relied on two people working together to detect deception or gain additional information from a third person (e.g., good cop/bad cop) and (b) tactics that relied on contextual constraints for communicator response to questions (e.g., the closed-ended questions used in legal settings where respondents are compelled to provide restricted answers such as yes or no). Otherwise, all suggested message tactics that could be applied to an unrestricted conversation between two people in an interpersonal setting were considered for inclusion in this typology. The objective in creating this classification scheme was to capture the diversity of tactics available, while collapsing similar message tactics into reasonable categories. Additional inspection of these collapsed tactic groups revealed several different motivations for telling the truth or for revealing additional information in an interactive setting. The remaining tactics were then further grouped according to the primary strategies for encouraging truthful responses or providing additional information. The resulting 15-part typology, shown in Table 1, lists the primary strategies for detecting deceit along with the message tactics employed in each strategy. For strategies with only one suggested tactic, the strategy and the tactic share the same title. This typology provides a working template for the study of language strategies used to detect deceptive communication in ongoing interpersonal interactions.
The following section describes the typology of message strategies and tactics as they have been applied in pragmatic settings. In this section the person using the deception detection message tactics is designated as the communicator, and the person who is the target of these strategies is designated as the suspected deceiver. Admittedly, the nomenclature of communicator and suspected deceiver might be better substituted with language that (a) reflects the communication contributions of both parties in interpersonal deception detection, and (b) suggests that recipients of message strategies designed to detect deceit may be not only suspected deceivers but also suspected truthtellers. For the present study, the terms communicator and suspected deceiver were used because they are straightforward and not likely to be confused during the description of complex tactics. Labels such as prober and target, interviewer and interviewee, and the simple Communicator A and Communicator B can be substituted by the reader, as the present labels were selected only for convenience and clarity in explicating the message tactics.

CLASSIFICATION OF THE TYPOLOGY

The first four overarching strategies--(a) intimidation, (b) situational futility, (c) discomfort and relief, and (d) bluffing--all require the individual probing for deceit to exude an air of confidence about an individual's mendacity (Buckwalter, 1983; Mettler, 1977). These strategies all present a basic assumption that the person is lying. Working from this assumption, they try to convince suspected deceivers to "'fess up" or to provide a more accurate picture. The success of these strategies is dependent upon suspected deceivers being aware that the person communicating with them believes they are lying (Inbau, Reid, & Buckley, 1986).

STRATEGY I: INTIMIDATION

Communicators using this strategy for detecting deception attempt to force individuals to admit to using deceit by intimidating them into telling the truth.
Techniques for accomplishing this strategy range from stern, authoritative statements and questions to probes that resemble "third degree" tactics.

Tactic 1: No nonsense. This tactic uses statements and questions that straightforwardly accuse an individual of being a liar (Gorden, 1980; Inbau et al., 1986; Moston & Stephenson, 1993; Yeschke, 1987). Examples of the no-nonsense tactic would be "Don't lie to me," "You are lying," and "Do you expect me to believe that?"

Tactic 2: Criticism. This tactic employs negative evaluation to intimidate the suspected deceiver (Bailey & Rothblatt, 1978; Belli, 1963). The delivery of a negative evaluation by a communicator not only indicates disagreement or disapproval of statements or actions but also carries the implicit claim of the prerogative to appraise (Davis, 1971). These negative evaluation ploys can become denigrating, for example: questioning the judgment and faculties of another (Churchill, 1978), telling the person that he or she is unworthy (Aubry & Caputo, 1980), and laughing at the person (Arther & Caputo, 1959).

Tactic 3: Indifference. An alternative tactic for connoting negativity and intimidation is for the communicator to feign indifference to the information presented by the suspect (Arther & Caputo, 1959). This can be supplemented with statements such as "Don't tell me any more" or "I'm not interested in anything you have to say in this matter." The indifference tactic resembles the criticism ploy. It suggests that the person being questioned is not worth any further probes or examination. And, as with the negative evaluation techniques, the function is to motivate the person under suspicion to tell the truth so as to achieve a positive evaluation or at least discontinue the negative one.

Tactic 4: Hammering. This technique of probing is described by Aubry and Caputo (1980) as a "single-minded, relentless pursuit" (p. 227).
It involves repeated use of either no-nonsense statements and questions, statements criticizing the suspect, or combinations of both. These messages can be delivered in a barrage with no pauses between the statements. This rapid fire of accusations and criticisms does not allow a respondent time to analyze or censor responses to the questions (Bailey & Rothblatt, 1978; Kestler, 1982). An example of how a communicator might use this strategy would be use of exclamations such as "You jerk" (criticism), "You are lying to me" (no nonsense), "You don't have an honest bone in your body" (criticism), and so on, allowing the suspected deceiver only brief moments to respond. This explosive series of extreme negativity can have an unnerving effect on the person questioned (Ehrlich, 1970; Mettler, 1977). In legal settings, the Trial Diplomacy Journal (1981) indicates that it is useful for flustering and intimidating witnesses into providing information they originally were reluctant to disclose. This tactic may also be used to juxtapose diverse questions, in hopes of further catching a person off guard (Steller & Boychuk, 1992). In preparing expert witnesses for testimony, Brodsky (1991) indicates that this tactic can be particularly intimidating.

STRATEGY II: SITUATIONAL FUTILITY

This strategy relies less on personal assaults and focuses more heavily on emphasizing the futility and impending danger associated with continued deceit.

Tactic 5: Unkept secret. This tactic emphasizes the futility of deceiving because the truth will eventually be known. For example, a person may be told that the truth will come out one day--maybe not today or tomorrow but, eventually, the truth will be known. The assumption is that continuing prevarication may seem futile if a person is made aware that he or she will be eventually found out anyway (Inbau et al., 1986).

Tactic 6: Fait accompli.
According to Aubry and Caputo (1980), the fait accompli tactic emphasizes that the respondent cannot alter history. The communicator using this tactic suggests that there is nothing that can be done to make a bad situation better and that continued deceit may only make it worse (Buckwalter, 1983). Examples of this tactic include pointing out that "What's done is done," "It can't be changed now," or "Don't dig a deeper hole for yourself by telling more lies to cover up." Tactic 7: Wages of sin. A more intensive use of this strategy involves depicting the various consequences that will result from continued deception on the part of the suspected deceiver (Buckwalter, 1983; Inbau et al., 1986). The idea behind this depiction is to motivate the person to tell the truth now, because holding out might not only be futile and make the situation worse but might also create an entire gamut of compounded problems for the suspected deceiver (Brady, 1976). Tactic 8: All alone. This tactic points out that the person is alone in the deception, one person against everyone else. This further emphasizes the vulnerability of the individual and the futility of continued deceit (Aubry & Caputo, 1980). STRATEGY III: DISCOMFORT AND RELIEF The belief that "confession is good for the soul" is well suited to describe the notion behind this strategy. This motivation for truth telling is based on the idea that lying is an uncomfortable activity. Using this concept, which is similar to the Hull-Spence learning theory concept of a drive-producing state, communicators attempt to exacerbate a suspected deceiver's discomfort, hence increasing the drive state (Logan, 1959). With this increased discomfort (or drive), deceivers are thought to be more likely to attempt to reduce this discomfort by admitting that they have been lying. Tactic 9: Discomfort and relief. The tactic for implementing this strategy is accomplished in two steps.
In the first step, the seriousness of the lie is magnified or outlined by the communicator (Killam, 1977). The communicator may also try to make the suspected deceiver feel guilty (Moston & Stephenson, 1993). The second step of this strategy is to not allow this discomfort to be reduced through any means other than telling the truth (Royal & Schutt, 1976). For example, attempts by a suspected deceiver to offer an explanation or excuse for actions can be countered with other possibilities that imply this suspected deceiver is lying (Kaiser, 1979). In interrogation settings, the suspect may even be seated in a chair that is straight-backed and immobile (Arther & Caputo, 1959; Buckwalter, 1983). This functions to further restrict a person's ability to reduce discomfort by moving the chair back, thereby making the interaction less immediate in nature. Communicators using this tactic also have the option of increasing discomfort by moving or leaning closer to their suspects (Arther & Caputo, 1959; Gudjonsson, 1992; Kestler, 1982), thereby further inhibiting any nonverbal attempt at drive reduction through lessened immediacy. For the second step of this strategy to be effective, the communicator must offer a way to reduce the unpleasant drive that has developed. Having blocked other methods of reducing discomfort, the communicator may suggest that telling the truth will reduce the unpleasantness of continued deception (Buckwalter, 1983; Royal & Schutt, 1976). STRATEGY IV: BLUFF This strategy capitalizes on fear of discovery. It relies on lying to a person to engender this fear and thereby motivate the target of the lie to be truthful. Tactic 10: Evidence bluff. This bluff is accomplished by pointing out some fabricated evidence that the person has lied (DeLaduranty & Sullivan, 1980; Inbau et al., 1986; Killam, 1977).
Statements such as "Don't tell me you didn't drive your car this week, your ashtray is filled with fresh cigarette stubs" or "Stop lying, you couldn't have been at work today, your boss said you never came in" are examples of ways of using an evidence bluff. Tactic 11: Imminent discovery. Although many bluffs rely on using deceitful evidence to catch deceit, an alternative is to engender fear of discovery by implying the possibility of the imminent uncovering of evidence that might impeach deceivers' credibility (Buckwalter, 1983). Communicators using this tactic point out that the suspected deceiver has a chance to tell the truth before such evidence surfaces; for example, "I have an appointment to speak with your boss in a few minutes, is there anything you would like to tell me first?" In the interrogation setting, this tactic might be further augmented by indicating that an accomplice might get away with the crime by implicating the suspected deceiver as the actual criminal (Arther & Caputo, 1959; Inbau et al., 1986; Killam, 1977). This particular tactic was popularized in the social sciences through the use of prisoners' dilemma games by researchers interested in studying conflict and cooperation (cf. Bixenstine & Wilson, 1963; Miller & Simmons, 1974). Essentially, this paradigm used game points to simulate the dilemma of two prisoners individually deciding to talk to the authorities or keep quiet, contingent on whether or not they believed the other prisoner would stick to the story (thereby minimizing evidence) or talk (thereby providing evidence). The consequences for the decisions made by research participants were manipulated by allotted game points; the consequences for actual crime suspects are much less ephemeral. Tactic 12: Mum's the word. In a bluff, what is said is as important as what is not said. Dailey (1957), for example, cautions communicators not to disclose the extent of their knowledge.
Specifically, suspected deceivers may actually be safer from discovery than they have been led to believe. Letting suspected deceivers know the extent of a communicator's knowledge may reduce this fear. Dailey (1957) further warns that information may be accidentally given away through the questions that are asked. The success of this bluff strategy relies on concealing the true extent of factual knowledge. A serious drawback with bluffing is that the communicator is using deceit to ascertain whether deception is occurring on the part of another. If individuals being probed become aware of the ploy, they are unlikely to feel a need to tell the truth. One of the inherent dangers of bluffing is that the bluff may be called; if the communicator is caught bluffing, further attempts to motivate a person to be truthful may be futile (Aubry & Caputo, 1980; Kestler, 1982). Because of this danger, bluffing tactics are considered last-resort attempts to elicit truthful information (Aubry & Caputo, 1980; Dailey, 1957). SUMMARY AND TRANSITION The strategies and implementation tactics that have been discussed so far place the person suspected of lying under duress (McDonald, 1963). Although these strategies may be effective in determining whether or not a person is being truthful, the severity of these strategies may compel a suspected deceiver to seek positive or stress-reducing responses from the communicator. Consequently, asking questions that assume deceit may yield admissions that conform to a communicator's suspicions, even if no deception has occurred (Aubry & Caputo, 1980; Gudjonsson, 1992). These strategies may also be of negligible use when the person questioned has the option of simply discontinuing the interaction (Balinsky, 1978; Metzer, 1977). The following strategies for detecting deceptive communication focus on gaining the confidence and respect of the person being probed for deceit.
The time-worn philosophy that "honey catches more flies than vinegar" is reflected in these strategies. The premise is that people will be more likely to disclose information when they trust and feel comfortable with the person with whom they are conversing (Balinsky, 1978; DeLaduranty & Sullivan, 1980; Kahn & Cannell, 1957; Rich, 1968; Schwartz, 1973; Stewart & Cash, 1974). This idea is recommended in all the bodies of literature that investigate strategies for discovering the truth. For example, Arther and Caputo (1959) feel that establishing rapport is so important that they implore police interrogators to find a likable quality in the person they are about to question, irrespective of the crime committed. These practitioners observe that people who wish to confide in someone do not go to enemies but, rather, to those individuals who they feel will understand and advise them (Dowling, 1979; Lipkin, 1974; Royal & Schutt, 1976; Walters, 1970). An image of sincerity, sympathy, impartiality, empathy, and politeness, in conjunction with a friendly attitude and genuine concern about the well-being of the person suspected of deceit, is a prerequisite for fostering a sense of rapport and confidence in which potentially negative information may be exchanged (Arther & Caputo, 1959; Kaiser, 1979; Lloyd-Bostock, 1989; Morris, 1973; Mulbar, 1951; Sayles & Strauss, 1981; Weber, 1981). J. McQuaig, P. McQuaig, and D. McQuaig (1981) and Molyneaux and Lane (1982) indicate that sincerity, once established, will pay off when convincing individuals to disclose negative information. This also allows people to present their side of the story (McQuaig et al., 1981; Stoeckle, 1987). Although many individuals probably will not disclose deceitful behavior simply because they find themselves in a warm, supportive environment, the creation of this environment is beneficial to establishing the truth in other ways. 
This climate may encourage these individuals to relax their defenses and provide information that may be used to secure the truth in the future (Ehrlich, 1970; Peskin, 1978). This is especially effective if the communicator does not indicate that prevarication is suspected (Wellman, 1953). Another benefit of this approach, according to McQuaig et al. (1981), is that the greater the depth of understanding, the easier it will be to obtain truthful information from a person. Several strategies of detecting deception rely on the establishment of rapport before they can be implemented. These strategies are: (V) gentle prods, (VI) minimization, and (VII) contradiction. STRATEGY V: GENTLE PRODS This strategy assumes that although the truth may come out simply as a result of positive rapport, individuals will probably need to be guided into providing the information necessary for the communicator to make an accurate assessment of veracity (Gorden, 1980). The tactics for implementing this strategy attempt to extend rapport and coax an individual into revealing information. Tactic 13: Encouragement. To encourage an individual to continue discussing information that may yield fruitful cues for detecting deception, a communicator may encourage exploration of a certain point with positive exclamations such as "I hear you!" "Yes?" "Aha!" and "Now really!" (Ivey & Authier, 1978; Kaiser, 1979; Kenny & Moore, 1979; Molyneaux & Lane, 1982; Van Meter, 1973). Nonverbal signals such as nodding and smiling may also be used to encourage an individual to continue a desired story line (Gorden, 1980; Johnson, 1988). Tactic 14: Elaboration. Encouragement may be coupled with requests for immediate elaboration (Balinsky, 1978; Kaiser, 1979). This elaboration tactic directly asks an individual to elaborate upon the topic being discussed (Edinburg, Zinberg, & Kelman, 1975; Kahn & Cannell, 1957; MacKinnon & Michels, 1971; Schwartz, 1973).
For example, an individual may be asked to tell a communicator more about a certain issue or to outline other germane ideas or feelings in reference to this issue (Gorden, 1980). The elaboration tactic encourages a person to flesh out his or her perspective (Moston & Stephenson, 1992). When a communicator must ask questions that appear to bring up unsavory or negative information, several prodding tactics may be employed to maintain rapport, even in light of these questions and their implications. Tactic 15: Diffusion of responsibility. This tactic diffuses any personal responsibility for questions that might be asked (Brady, 1976). For example, a communicator may avoid personal responsibility for a question by cushioning the request with words such as "people are saying that you...." Hence, the communicator is no longer asking the question; rather, an interested public is asking the question (Brady, 1976). Weber (1981) further suggests that the following ideas be conveyed to diffuse a communicator's personal responsibility: (a) the question is a requirement of the communicator's role and not a personal interest, and (b) the situation demands that this question be asked and in this manner. By reducing personal responsibility for potentially negative questions, the communicator may still retain a positive relationship with the respondent and avoid becoming a target against which the suspected deceiver can focus negative affect (Davis, 1971). Tactic 16: Just having fun. A communicator may also ask negative questions and possibly maintain rapport by implying that a potentially threatening question is a playful one (Banaka, 1971; Brady, 1976). For example, "May I play the devil's advocate for a moment?" is a method for neutralizing the harshness of a request for information. Tactic 17: Praise. Another tactic for asking potentially threatening questions while maintaining rapport is to begin these questions with praise for the respondent (Brady, 1976).
For example, a communicator needing to determine if a horse trainer lied about the validity of some registration papers could employ the following type of question: "Hi Slim! Everyone says that you are the best in this neck of the woods, but Joe X over there says that your horses were never officially registered with the association. What are the grounds for his claim?" Praise for the person suspected of deceit may also be employed without being followed by a negative question (Davis, 1971; Inbau et al., 1986). This praise may be used to indicate that whatever behavior or act the communicator suspects is being covered up is actually a positive attribute or action (Aubry & Caputo, 1980). This praise may spur the suspected deceiver into expanding on the topic, perhaps providing clues to the occurrence of prevarication. An example of this application to the horse trainer situation would be "Hi Slim! I can't believe how you have been pulling this thing off. You must be a genius to be able to fool the Mountain Pleasure Horse Association with your doctored-up registration papers!" The use of positive evaluation may also develop a relationship that subtly places the communicator in the role of evaluator and the suspected deceiver in the role of the person being evaluated. By praising the person being probed, the communicator may subtly assert the prerogative to evaluate (Davis, 1971). In this situation, the suspected deceiver is placed in the role of pleasing the information seeker and hence may provide the information the communicator has sought. By using positive evaluation, as opposed to the negative evaluation described in Strategy I, Tactic 2, the communicator may keep the interaction in a positive light and further strengthen the relationship (Arther & Caputo, 1959; Inbau et al., 1986).
In the tactics discussed for employing this method of gentle prods, the communicator primarily attempts to build or maintain the sense of positive rapport that has been established. Suspected deceivers are guided by simple exclamations of encouragement or requests for expansion. In none of these tactics are the suspected deceivers alerted that the communicator actually believes they are lying or could be lying. Any such suspicion is passed off as being attributable to an outside force. As in the gentle prods, the following strategies also attempt to maintain a positive sense of rapport with the suspected deceiver. However, they differ from the gentle prods in that the communicator makes the suspected deceiver aware that prevarication is suspected. These strategies also imply greater personal responsibility for the questions that are asked and statements that are made. STRATEGY VI: MINIMIZATION This strategy attempts to facilitate any admission to deceptive behavior. It is composed of face-saving tactics that play down the significance of a deceptive performance and the suspected deceiver's responsibility for these actions. Tactic 18: Excuses. One face-saving tactic is to offer excuses for an individual's motivation to deceive (Bussey, Lee, & Grimbeek, 1993; MacDonald, 1987; Schwartz, 1973). For example, a communicator may suggest a morally acceptable reason for why the suspected deceiver has been lying. Actions that may have the potential of being concealed with the lies may also be given a less reprehensible interpretation and thereby make them easier to discuss (Inbau et al., 1986). For example, a communicator could make statements excusing behavior such as "Lying about your small annual salary is understandable in these days of inflation" or "The temptation was just too great for you to not take advantage of the situation." Tactic 19: It's not so bad. Another alternative is to reduce the significance of an action (Dailey, 1957). 
This softening or lessening of an action may be accomplished by playing down its seriousness, lessening the degree of the suspected deceiver's participation or involvement, or mitigating the degree of impact of the action on others (Buckwalter, 1983; Inbau et al., 1986). Statements such as "Don't worry, a few fibs are no big deal" or "They have so much money they probably won't even miss what was taken" are examples of this tactic. Tactic 20: Others have done worse. Minimization may also be accomplished by indicating that anyone else would have done the same thing, perhaps to a far worse and more grievous extent than whatever the suspected deceiver has done (Aubry & Caputo, 1980; Buckwalter, 1983; Inbau et al., 1986; Metzer, 1977; Royal & Schutt, 1976). This tactic can be further augmented to minimize the significance of the lie or of the action concealed by indicating that other people have done things that are far worse than what the suspected deceiver has done (Aubry & Caputo, 1980; Inbau et al., 1986). Tactic 21: Blaming. A communicator may try to foster a face-saving climate for an individual by shifting the blame to someone else, sometimes even the victim of an action or lie (Arther & Caputo, 1959; Dailey, 1957; Killam, 1977; Yeschke, 1987). This condemnation of an alternate source creates a situation in which admitting to prevarication or undesirable actions is an indictment of an external person or thing and not an admission of guilt or responsibility (Buckwalter, 1983). SUMMARY The two strategies discussed above share the assumption that people will be more likely to be truthful in a warm, supportive climate than in a hostile one. Advocates of these strategies indicate that just as much information may be learned from a person who has a sense of security as from one who is being antagonized and harried (Blumenkopf, 1981; Ehrlich, 1970). Further, Bussey et al.
(1993) report that individuals perceiving less censure will be more likely to respond truthfully than those perceiving greater censure for truthfulness. An additional notion concerning the utility of these strategies is that one can always become intimidating and negative after trying to gain information in a pleasant climate (Gorden, 1980; Schwartz, 1973; Weber, 1981). Practitioners emphasize that it is extremely difficult to probe for deceit in a supportive and positive manner after establishing a negative environment (Kestler, 1982; Royal & Schutt, 1976). STRATEGY VII: CONTRADICTION This strategy follows the pattern suggested by Royal and Schutt (1976) and Kestler (1982). It begins rather benignly by setting up an atmosphere of trust and rapport, allowing individuals to "tell their stories" and thereby satisfying the need to "tell their side" (Buckwalter, 1983; Kestler, 1982; Steller & Boychuk, 1992). During the suspected deceivers' presentations of their scenarios, gentle prodding tactics may be employed to encourage these individuals to continue their story or to elaborate on specific points. However, unlike the gentle prods, the communicator does not expect the speaker to reveal the truth because of positive rapport and careful guidance (Gorden, 1980; Peskin, 1978). Rather, the specific information used in this strategy is any form of inconsistency or contradiction in the account provided by the suspected deceiver (McGough, 1992; Reiser & Schroder, 1980). Often these inconsistencies are not immediately pointed out to the speaker but, rather, are allowed to build up. The more inconsistencies and contradictions that pile up, the stronger the evidence available to the communicator (Tierney, 1970). The assumption behind this method is that if the speaker is being deceptive, then a consistent presentation may not be possible, especially when one must continue to build lies as one speaks (Ehrlich, 1970; Royal & Schutt, 1976; Wellman, 1953).
Tactic 22: Buildup of lies. In this technique, the inconsistencies and contradictions are brought to the speaker's attention after the story is finished by pointing out contradictory statements. The suspect is asked to explain these contradictions (Belli, 1963; Gallagher, 1962; Ianuzzi, 1982; La Forge & Henderson, 1990; Morrison, 1993; Robbins, 1980; Schwartz, 1973). Tactic 23: No explanations allowed. If several contradictions have been noted by a communicator, any explanation by the suspected deceiver for the occurrence of one set of contradictions can be followed up with the presentation of further contradictions (MacDonald, 1987; Royal & Schutt, 1976; Wellman, 1953). As contradictions build up in this fashion, the communicator may even be able to contradict the suspected deceiver's explanations with statements previously made (Bailey & Rothblatt, 1978). Tactic 24: Repetition. If contradictions have not appeared in the initial story presented by a suspected deceiver, the communicator may attempt to draw contradictions out by repeating questions to the suspect and listening for significant differences in the speaker's responses to these questions (Aubry & Caputo, 1980; M. Brennan & S. Brennan, 1989; Krieshok, 1987; Morrison, 1993; Moston & Stephenson, 1993; Royal & Schutt, 1976; Wellman, 1953). Tactic 25: Compare and contrast. Another alternative when contradictions have not occurred in the initial story is for the communicator to ask the suspected deceiver to compare and contrast some of the issues presented (Bailey & Rothblatt, 1978; Brennan & Brennan, 1989; Morrison, 1993; Younger, 1982). From these comparisons, inconsistencies may become apparent, and these contradictions can be pointed out to the suspected deceiver by the communicator (McQuaig et al., 1981; Royal & Schutt, 1976). Tactic 26: Provocation.
Another method of seeking contradiction is for the communicator to provoke the suspected deceiver into justifying actions or statements, then follow up on the weak points of the response to provocation (Kaiser, 1979; MacDonald, 1987; Rothblatt, 1982; Tierney, 1970). Provoking suspected deceivers in this way may cause them to become flustered and "slip up" on the picture of reality that has previously been presented (Tierney, 1970; Weber, 1981). Practitioners of law have often cited the presence of inconsistencies and contradictions in testimony as one of the most powerful ways to impeach a speaker's credibility (Blumenkopf, 1981; Carter, 1980; Younger, 1982; Wellman, 1953). In a situation where there is little evidence of innocence or guilt, inconsistent testimony may be the only clue that a less than accurate picture has been presented. Tactic 27: Question inconsistencies as they appear. Although the contradiction strategy, in general, requires a prior rapport with the speaker, some communicators prefer not to wait until a story is complete to point out inconsistencies and contradictions (Barthold, 1975; Blumenkopf, 1981; Brennan & Brennan, 1989; Buckwalter, 1983; Callahan & Bramble, 1983; Morrison, 1993; Moston & Stephenson, 1993). However, this tactic does not allow the communicator to gather as much information about the suspected deceiver. It may be most useful when information about the suspected deceiver is already known, and this person's story is not yielding new information (Bailey & Rothblatt, 1978; Gorden, 1980). As with the contradiction strategy, the next four categories of strategies rely on trapping a person in a situation where, as a result of the trap, the respondent may assume the communicator obviously knows of the deceit. 
STRATEGY VIII: ALTERED INFORMATION A communicator using this strategy presents altered information either to trick a suspected deceiver into revealing involvement in deceit or to signal to the communicator that the suspected deceiver is actually telling the truth. Tactic 28: Exaggeration. With this tactic the communicator might exaggerate a suspected deceiver's level of deception, involvement in a concealed activity, or motive for deceiving (Aubry & Caputo, 1980; Brady, 1976; Caesar, 1968). For example, in a situation where someone has lied about involvement in an event, a message designed to imply that the communicator believes the suspected deceiver's involvement is much more extensive than it actually is may motivate an explanation that reveals the true extent of involvement (Brady, 1976). Exaggeration may also be used indirectly to trick a suspected deceiver into providing truthful information. For example, instead of asking a person directly if he or she has ever sold term papers to students, a communicator may pose the following type of question: Probe: Is it true that some students can get an A in a class by simply paying a graduate student $400 for a prewritten term paper? If the person questioned is actually an undergraduate or actually charges only $50 per paper, he or she may be tempted to voice one of the following responses: Response: Oh, no, not that much; it is usually only $50 or so. Response: Oh, no, graduate students have never been in on the term paper scam. Tactic 29: Embedded discovery. In employing this tactic, the communicator asks a suspected deceiver a question containing incorrect information. The rationale behind this tactic is that the correction of incorrect information may signal that a person is actually telling the truth (Churchill, 1978). The following example, culled from an old western movie, illustrates how embedded discovery can be used to verify a person's statements (Churchill, 1978): A U.S.
marshall from Wyoming is visiting the sheriff of Tucson. He wants the sheriff to help him locate a band of outlaws that he has followed from Wyoming to Arizona. To decide whether or not he really is the marshall he claims to be, the sheriff asks the following questions: Sheriff: Do you know John Smith, the U.S. Marshall next to you in Idaho? Marshall: Yes I do. Sheriff: Has he recovered from the bullet wound in his arm yet? Marshall: Yes, he has, but it was in his leg, not his arm. Sheriff: Okay, you're the marshall you say you are. Since the sheriff knew that the bullet wound was really in Smith's leg (his question was a trap question), he accepted the man as the U.S. marshall he claimed to be. (pp. 54-55) A communicator may also use embedded discovery to confuse a suspected deceiver by slightly altering information that was provided earlier. This altered information can be repeated back to the suspect in an attempt to see if this person will try to correct it (Aubry & Caputo, 1980; Bailey & Rothblatt, 1978; Buckwalter, 1983). For example, a person who indicates that he or she returned to town on the tenth of the month may be asked if it is true that he or she returned to town on the eleventh. Churchill (1978) has pointed out problems in using the embedded discovery method for deception detection. First, the presentation of incorrect information does not necessarily mean that an innocent or guilty party will correct it. Second, the respondent to whom the question is addressed may perceive that the communicator wants an answer to the question and not a correction of its content. Third, the suspected deceiver may outguess the communicator's intent and avoid being trapped. STRATEGY IX: A CHINK IN THE DEFENSE This is a two-stage strategy in which the communicator first gains a foothold in the account presented by a suspected deceiver, then uses this foothold to implicate an individual as having deceived (Inbau et al., 1986).
The basic concept behind this strategy is that if the person has lied about one aspect of behavior, then he or she is likely to have been lying about the entire matter. Tactic 30: A chink in the defense. In the first stage of this tactic, the communicator tries to obtain an admission that the suspected deceiver has lied about one small part of whatever the communicator believes the person to be lying about (Bailey & Rothblatt, 1978; Inbau et al., 1986). Some wily communicators may even try to get an individual to admit to thinking about participating in a small indiscretion (Inbau et al., 1986). The second stage of this tactic is implemented once an individual has admitted to lying or to considering lying about a small aspect of the account (Buckwalter, 1983). With this admission, the communicator then has leverage to determine if this individual has lied about the entire matter at hand. In the second stage of this tactic, the communicator asks questions and makes statements that extrapolate from the admission that has already been achieved (Kestler, 1982). For example, a pet-sitter who has confessed to slight neglect of a bird could be told, "Since you lied about not freshening the parakeet's water, you must also be lying about how it disappeared. If you lied about one thing, what guarantee do I have that you haven't lied about others?" STRATEGY X: SELF-DISCLOSURE In this strategy the information seeker relies on the norm of reciprocity to gain information on whether a suspected deceiver is lying or telling the truth (Brady, 1976). Tactic 31: Self-disclosure. In using the reciprocity norm, a communicator reveals things about himself or herself to the suspected deceiver (Gorden, 1980; Johnson, 1988). The expectation of this tactic is that as a result of this self-disclosure, the suspect will, in turn, reveal personal information, some of which may indicate whether or not he or she has been deceptive (Brady, 1976).
STRATEGY XI: POINT OUT DECEPTION CUES The idea behind this strategy is to shake potential deceivers' confidence in their image management, and hence lead them to believe that they are not effectively concealing deception. Tactic 32: Point out deception cues. The tactic for using this strategy is for the communicator to point out to a person that he or she is exhibiting certain physical manifestations that are indicative of deception, such as sweating or voice change (Aubry & Caputo, 1980; Moston & Stephenson, 1993; Royal & Schutt, 1976). This strategy does not rely on the actual correlation of physical changes with deceptive communication but instead relies on the suspected person believing that these cues are actually associated with lying (Inbau et al., 1986; Moston & Stephenson, 1993). Therefore, any obvious physiological effect that is noted by the communicator may be used to confront an individual (Buckwalter, 1983; Killam, 1977). The next two strategies rely on fostering the attitude that telling the truth is the best thing for the suspected deceiver to do in this situation. STRATEGY XII: CONCERN In the strategy of concern, telling the truth is presented as the best alternative because the communicator is concerned with the suspected deceiver's well-being (Lloyd-Bostock, 1989). Tactic 33: You are important to me. In this tactic, concern may be shown by the communicator saying that the suspected deceiver is someone special or someone whom the communicator has come to care for and to admire. Or, if these statements do not appear to be appropriate, the suspected deceiver could be told that there is a strong resemblance between him or her and someone who is very close to the communicator, such as a brother, sister, mother, father, or friend (Inbau et al., 1986). If possible, the communicator may attempt to assume a parental role with the suspected deceiver. 
This may be accomplished by implying that the individual reminds the communicator of his or her child, a younger sibling, or a relative. In this instance, care and concern are not only being shown, as in the examples above, but, by assuming the parental image, deception may become more difficult for the respondent, especially if lying to one's parents is difficult for this person to carry out (Royal & Schutt, 1976).

Tactic 34: Empathy. Communicators using this technique indicate that they can understand how the person feels (Aubry & Caputo, 1980; Yeschke, 1987). Communicators employing this tactic might suggest that they also have felt the need to prevaricate and have done so in the past. Communicators might also indicate that they would probably be responding in a similar manner if they were suspected of lying. By using this tactic, the presence of a kindred soul who is concerned may facilitate an admission of deceit (Aubry & Caputo, 1980). A perception of empathy may also be bolstered with statements indicating general concern and sympathy, such as "I understand why you would want to hold back the facts."

STRATEGY XIII: KEEPING THE STATUS QUO

The unifying motivation underlying these two tactics is to admonish an individual to be truthful in order to retain his or her current status in life.

Tactic 35: What will people think? A communicator may indicate to the suspected deceiver that his or her status is in danger if others find out about the deceit, especially if these others are people who are significant to the suspected deceiver (Aubry & Caputo, 1980; Logan, 1959). The communicator in this context does not need to threaten the suspected deceiver with telling these significant people; rather, this strategy is based on making suspects see how upset and hurt others would be if they knew about the deception (Buckwalter, 1983).

Tactic 36: Appeal to pride.
Another alternative for these appeals is to convince suspects that their own self-concept will be altered if the perpetration of deception continues. This may be accomplished by appealing to a suspected deceiver's pride (Arther & Caputo, 1959; Caesar, 1968). A communicator may point out that the suspect appears to be a good and decent person, or a kind and considerate person, or whatever type of person the communicator perceives the suspect might consider himself or herself to be. Then the communicator may point out the incongruity between the suspect's behavior and the person he or she appears to be. The implication here is that the individual cannot continue to lie and realistically maintain a positive self-image (Buckwalter, 1983).

The final two strategies are quite straightforward. Communicators using the tactics from both of these strategies show that they doubt a suspect's veracity. The difference between these final strategies lies in the nature of the directives given to the suspected deceiver.

STRATEGY XIV: DIRECT APPROACH

In this strategy, the communicator directly admonishes a suspected deceiver to tell the truth (MacHovec, 1989). The suspect is not confronted with an accusation of deceit; rather, the person is simply asked to provide an accurate picture (Goldstein, 1989).

Tactic 37: Direct approach. This tactic eschews any attempt to motivate the disclosure of the truth other than by the underlying moral motivation that telling the truth is a desirable thing (Buckwalter, 1983; Gorden, 1980). It can be accomplished with a number of statements, such as "Simply tell me the truth" or "Let's be honest here." The communicator may also employ this tactic by stating, "Surely you have no objection to discussing the truth about this occurrence . . . with me?" (Buckwalter, 1983, p. 106).
The intent of the direct approach is that the suspected deceiver should see how obvious his or her lies are and should therefore take responsibility for them (Aubry & Caputo, 1980; Kaiser, 1979; Royal & Schutt, 1976).

STRATEGY XV: SILENCE

This strategy is also straightforward in nature. The motivating concept behind it is the creation of a verbal vacuum that an individual will find uncomfortable and will attempt to fill (Kelner, 1981). Optimally, the individual will fill this vacuum with information that will help in determining veracity, or with an actual admission of responsibility for deceptive communication (Bailey & Rothblatt, 1978; Gorden, 1980; Kestler, 1982).

Tactic 38: Silence. In using this tactic, the communicator maintains silence after a person has said or done something that the communicator believes is indicative of deceit (Ivey & Authier, 1978). Aubry and Caputo (1980) suggest that an effective way to maximize this strategy is for the communicator to look directly into the suspect's eyes while maintaining this uncomfortable silence. Finally, Moston and Stephenson (1993) suggest pausing after asking an important question, which may cause the person being questioned to change an answer or elaborate on the answer provided.

DISCUSSION

This classification of interactive strategies and language tactics represents an initial attempt to develop a template for analyzing deception detection in ongoing interactions. Practitioners who attempt to detect deception as part of their professions have given exhaustive consideration to the issue of detecting deceit in ongoing interactions. These practitioners are not hesitant to point out that a suspicion of deceit may not be confirmed or disconfirmed with the first message tactic, or even the first set of tactics (DeLaduranty & Sullivan, 1980; Hatherill, 1971; Mettler, 1977; Reiser & Schroder, 1980; Roblee & McKechnie, 1981).
Bingham, Moore, and Gustad (1959) suggest that lie detection may always be part of a gradual process of accumulating information from which to make a decision. Buckwalter (1983) notes that part of the reason for the proliferation of deception detection tactics is the difficulty of applying the same strategy across different types of individuals, lies, and situations. These practitioners also suggest that nonverbal behavior may be used to facilitate or emphasize the message tactics used while detecting deceit. Specific references to the use of nonverbal behaviors are readily apparent in the descriptions of violating personal space as a method of intimidation (Brodsky, 1991) or of increasing discomfort (Arther & Caputo, 1959; Buckwalter, 1983). Vocalic cues, nods, and smiles are likewise suggested as techniques to accompany the verbal encouragement provided to prompt the suspected deceiver into talking to the communicator (Gorden, 1980; Johnson, 1988). In reviewing the tactics presented in this typology, it is easy to see how other techniques might be augmented through skillful use of nonverbal cues. For example, a difficult question might be mitigated with a warm smile, or the use of silence might be all the more discomforting when accompanied by an unwavering stare. One must keep in mind, when considering these strategies and tactics, that they are primarily techniques that practitioners have learned from experience or that have been passed down to them from others working in their field. These techniques are not the result of scientific study; rather, they are the product of gut feelings, anecdote, myth, and tradition. Nevertheless, their existence is clear from scholars who have studied the use of these tactics in police interrogations (Irving, 1980; Irving & McKenzie, 1989) and who have studied issues such as facework and politeness in legal settings (Penman, 1990).
However, with the exception of examinations of the effectiveness of intimidation tactics and of the use of evidence in eliciting confessions to crimes (e.g., Moston & Stephenson, 1992), the actual effectiveness of these strategies in detecting deception is unknown. Nevertheless, the strategies and tactics used by these practitioners may provide a template for beginning to explore interactive attempts to detect deception. Clearly, much more research is needed to truly understand the language of detecting deceit. Laboratory experiments may be one of the best ways to determine whether these strategies are actually effective in detecting deceit. One of the difficulties in scientifically studying deception detection in the realm of the practitioner is that researchers are limited to assessing only the strategies that result in catching liars who fail in their prevarication. In pragmatic settings, it is impossible to determine how many lies are successfully perpetrated, or how many "innocents" may actually confess to crimes or unwittingly fall into veracity traps. Laboratory studies of these interactive deception detection techniques should allow more controlled assessments to take place. An initial investigation of strategy usage in my research program suggests that the average "naive" communicator can draw on these strategies when asked to determine if he or she is being told lies or the truth by another conversational participant. Of 356 college students who were asked to determine whether the person they were talking to was lying or telling the truth, 326 used several different combinations of these pragmatic techniques to assist them in detecting deception. These communicators were not coached in strategy usage; instead, they were simply given 15 minutes to determine if their conversational partner was lying or telling the truth.
These participants achieved an overall accuracy rate of 61%, with accuracy varying by condition and by the types of strategies employed. Strategies and tactics used by communicators in this introductory laboratory inquiry were coded by research assistants from tapes of these ongoing conversations. Coders memorized the typology of strategies and enactment tactics and were individually tested on their ability to provide examples of each of these strategies and tactics and to identify the usage of these interactive techniques. These research assistants then coded the videotapes together, so that they could reach agreement on the strategies and tactics they were observing. The interactive techniques were coded for each communicator speaking turn throughout the length of the conversations (15 minutes or less). At the conclusion of coding all the speaking turns for each communicator, the coders made an overall assessment of the primary strategy used in the conversation (if such a judgment could be rendered). For example, in this initial study, communicators who were accurate in their interactive attempts to detect deceit most frequently used the strategy of contradiction as their overall strategy for detecting deception. Of the accurate communicators in this study, 42% used this strategy as their overall deception detection technique. Although the contradiction strategy was judged by coders to be the overall strategy used by these accurate communicators, other strategies and tactics were also used throughout the course of their conversations. These strategies and tactics were recorded by the coders at the points in the conversation at which they occurred. Obviously, this discussion is only a thumbnail sketch of this initial research effort to study the language of detecting deceit. The data described are still under analysis.
The brief report of this study will hopefully illustrate how this typology may be used in understanding interactive deception detection. Because most prior research restricts participants in detecting deception, scholars interested in studying deception detection in interactions need to explore more interactive environments. The typology presented in this article is offered as a template for beginning to learn how communicators use language strategies to detect deceptive communication. At first blush, some of these strategies may seem foreign to interpersonal communication; however, a closer examination may uncover interpersonal analogues to these pragmatic situations. The intensity of a spouse's attempts to determine if there is another woman or man in a loved one's life may rival a police detective's attempts to determine if a crime suspect has something to hide. A college student trying to determine if a potential roommate has "stiffed" other roommates in the past may be similar in tactic usage to an employment interviewer's attempts to discern a prospective employee's past work history. Although differences in pragmatic and interpersonal deception detection certainly may exist, the strategies of the language of detecting deceit may be similar. After all, the naive communicators in the study described in this discussion were able to employ a variety of interactive deception detection strategies without coaching from a pragmatic mentor, or years of formal training and practice. It is possible that life itself presents the average communicator with more than enough opportunities and incentives for learning how to identify the truth and uncover deceit.

Table 1: Typology of Overarching Strategies Together With Interaction Tactics

I. Intimidation: 1. No nonsense; 2. Criticism; 3. Indifference; 4. Hammering
II. Situational futility: 5. Unkept secret; 6. Fait accompli; 7. Wages alone; 8. All alone
III. Discomfort and relief: 9. Discomfort and relief
IV. Bluff: 10. Evidence bluff; 11. Imminent discovery; 12. Mum's the word
V. Gentle prods: 13. Encouragement; 14. Elaboration; 15. Diffusion of responsibility; 16. Just having fun; 17. Praise
VI. Minimization: 18. Excuses; 19. It's not so bad; 20. Others have done worse; 21. Blaming
VII. Contradiction: 22. Buildup of lies; 23. No explanations allowed; 24. Repetition; 25. Compare and contrast; 26. Provocation; 27. Question inconsistencies as they appear
VIII. Altered information: 28. Exaggeration; 29. Embedded discovery
IX. A chink in the defense: 30. A chink in the defense
X. Self-disclosure: 31. Self-disclosure
XI. Point out deception cues: 32. Point out deception cues
XII. Concern: 33. You are important to me; 34. Empathy
XIII. Keeping the status quo: 35. What will people think?; 36. Appeal to pride
XIV. Direct approach: 37. Direct approach
XV. Silence: 38. Silence

REFERENCES

Arther, R. O., & Caputo, R. R. (1959). Interrogation for investigators. New York: William C. Copp.
Aubry, A. S., Jr., & Caputo, R. R. (1980). Criminal interrogation (3rd ed.). Springfield, IL: Charles C Thomas.
Bailey, F. L., & Rothblatt, H. B. (1978). Cross-examination in criminal trials. Rochester, NY: The Lawyers Co-operative.
Balinsky, B. (1978). Improving personnel selection through effective interviewing: Essentials for management. New Rochelle, NY: Martin M. Bruce.
Barthold, W. (1975). Attorney's guide to effective discovery techniques. Englewood Cliffs, NJ: Prentice-Hall.
Belli, M. (1963). Modern trials. Indianapolis, IN: Bobbs-Merrill.
Bingham, W. V. D., Moore, B. V., & Gustad, J. W. (1959). How to interview (4th ed.). New York: Harper & Brothers.
Bixenstine, V. E., & Wilson, K. V. (1963). Effects of level of cooperative choice by the other player on choices in a prisoner's dilemma game: II. Journal of Abnormal and Social Psychology, 67, 139-147.
Blumenkopf, J. S. (1981). Depositional strategy and tactics. American Journal of Trial Advocacy, 5, 231-251.
Brady, J. (1976). The craft of interviewing.
Cincinnati, OH: Writers Digest.
Brennan, M., & Brennan, S. E. (1989). Strange language: Child victims under cross-examination. Wagga Wagga, NSW, Australia: Riverina Murray Institute of Higher Education.
Brodsky, S. L. (1991). Testifying in court: Guidelines and maxims for the expert witness. Washington, DC: American Psychological Association.
Buckwalter, A. (1983). Interviews and interrogatories. Boston: Butterworth.
Buller, D. B., & Burgoon, J. K. (1994). Deception: Strategic and nonstrategic communication. In J. A. Daly & J. M. Wiemann (Eds.), Strategic interpersonal communication (pp. 191-223). Hillsdale, NJ: Lawrence Erlbaum.
Buller, D. B., Strzyzewski, K. D., & Comstock, J. (1991). Interpersonal deception: I. Deceivers' reactions to receivers' suspicions and probing. Communication Monographs, 58, 1-24.
Buller, D. B., Strzyzewski, K. D., & Hunsaker, F. G. (1991). Interpersonal deception: II. The inferiority of conversational participants as deception detectors. Communication Monographs, 58, 25-40.
Burgoon, J. K. (1989, May). Toward a processual view of interpersonal deception. Paper presented to the annual meeting of the International Communication Association, San Francisco.
Burgoon, J. K. (1992, November). Applying an interpersonal communication perspective to deception: Effects of suspicion, deceit, and relational familiarity on perceived communication. Paper presented to the annual meeting of the Speech Communication Association, Chicago.
Burgoon, J. K., & Buller, D. B. (1994). Interpersonal deception: III. Effects of deceit on perceived communication and nonverbal behavior dynamics. Journal of Nonverbal Behavior, 18, 155-184.
Bussey, K., Lee, K., & Grimbeek, E. J. (1993). Lies and secrets: Implications for children's reporting of sexual abuse. In G. S. Goodman & B. L. Bottoms (Eds.), Child victims, child witnesses: Understanding and improving testimony (pp. 147-168). New York: Guilford.
Caesar, G. (1968). Incredible detective: The biography of William J.
Burns. Englewood Cliffs, NJ: Prentice-Hall.
Callahan, M., & Bramble, B. (1983). Discovery in construction litigation. Charlottesville, VA: Michie.
Carter, G. T. (1980). The right to confront witnesses. American Journal of Trial Advocacy, 4, 464-467.
Churchill, L. (1978). Questioning strategies in sociolinguistics. Rowley, MA: Newbury House.
Dailey, C. V. (1957). Interrogation of the subject after testing. In V. A. Leonard (Ed.), Academy lectures on lie detection (pp. 84-92). Springfield, IL: Charles C Thomas.
Davis, J. D. (1971). The interview as an arena: Strategies in standardized interviews and psychotherapy. Stanford, CA: Stanford University.
DeLaduranty, J. C., & Sullivan, D. R. (1980). Criminal investigation standards. New York: Harper & Row.
Dowling, J. L. (1979). Criminal investigation. New York: Harcourt Brace Jovanovich.
Edinburg, G. M., Zinberg, N. E., & Kelman, W. (1975). Clinical interviewing and counseling: Principles and techniques. New York: Appleton-Century-Crofts.
Ehrlich, J. W. (1970). The lost art of cross-examination: Or perjury anyone? New York: Putnam.
Ekman, P., & Friesen, W. V. (1974). Detecting deception from the body or face. Journal of Personality and Social Psychology, 29, 286-298.
Gallagher, W. H. (1962). Techniques of cross-examination. New York: Practicing Law Institute.
Goldstein, N. (1989). Malingering and the evaluation of competency to stand trial. In R. Rosner & R. B. Haron (Eds.), Criminal court consultation (pp. 223-258). New York: Plenum.
Gorden, R. L. (1980). Interviewing: Strategy, techniques, and tactics (3rd ed.). Homewood, IL: Dorsey.
Gudjonsson, G. H. (1992). The psychology of interrogations, confessions, and testimony. New York: Wiley.
Hatherill, G. (1971). A detective's story. New York: McGraw-Hill.
Ianuzzi, J. N. (1982). Cross-examination: The mosaic art. Englewood Cliffs, NJ: Prentice-Hall.
Inbau, F. E., Reid, J. E., & Buckley, J. P. (1986). Criminal interrogation and confessions (3rd ed.).
Baltimore: Williams & Wilkins.
Irving, B. (1980). Police interrogation: A case study of current practice. Royal Commission on Criminal Procedure, Research Study No. 2. London: Her Majesty's Stationery Office.
Irving, B. L., & McKenzie, I. K. (1989). Police interrogation: The effects of the Police and Criminal Evidence Act. London: The Police Foundation.
Ivey, A. E., & Authier, J. (1978). Microcounseling: Innovations in interviewing, counseling, psychotherapy, and psychoeducation. Springfield, IL: Charles C Thomas.
Johnson, R. (1988). Interviewing adults. In J. M. Dillard & R. R. Reilly (Eds.), Systematic interviewing: Communication skills for professional effectiveness (pp. 140-158). Columbus, OH: Merrill.
Kahn, R. L., & Cannell, C. F. (1957). The dynamics of interviewing: Theory, tactics, and cases. New York: Wiley.
Kaiser, A. (1979). Questioning techniques: A practical guide to better communication. Pomona, CA: Hunter House.
Kalbfleisch, P. J. (1985). Accuracy in deception detection: A quantitative review (Doctoral dissertation, Michigan State University, 1986). Dissertation Abstracts International, 46, 4453B.
Kalbfleisch, P. J. (1990). Listening for deception: The effects of medium on accuracy of detection. In R. N. Bostrom (Ed.), Listening behavior: Measurement and application (pp. 155-176). New York: Guilford.
Kalbfleisch, P. J. (1992). Deceit, distrust, and the social milieu: Application of deception research in a troubled world. Journal of Applied Communication Research, 20, 308-334.
Kelner, M. (1981). Testimony and truth. Case and Comment, 86, 9-11.
Kenny, J. P., & Moore, H. W., Jr. (1979). Principles of investigation. St. Paul, MN: West.
Kestler, J. L. (1982). Questioning techniques and tactics. Colorado Springs, CO: Shepards/McGraw-Hill.
Killam, E. W. (1977). Interview techniques for children. The Police Chief, 44, 22-24.
Knapp, M. L., Cody, M. J., & Reardon, K. K. (1987). Nonverbal signals. In C. R. Berger & S. H.
Chaffee (Eds.), Handbook of communication science (pp. 385-418). Beverly Hills, CA: Sage.
Krieshok, T. S. (1987). Psychologists and counselors in the legal system: A dialogue with Theodore Blau. Journal of Counseling and Development, 66, 69-72.
La Forge, J., & Henderson, P. (1990). Counselor competency in the courtroom. Journal of Counseling and Development, 68, 456-459.
Lipkin, M. (1974). The care of patients: Concepts and tactics. New York: Oxford University Press.
Lloyd-Bostock, S.M.A. (1989). Law in practice: Applications of psychology to legal decision making and legal skills. Chicago: Lyceum.
Logan, F. A. (1959). The Hull-Spence approach. In S. Koch (Ed.), Psychology: A study of a science (Vol. 2, pp. 293-358). New York: McGraw-Hill.
MacDonald, J. M. (1987). The confession: Interrogation and criminal profiles for police officers. Denver, CO: Apache.
MacHovec, F. J. (1989). Interview and interrogation: A scientific approach. Springfield, IL: Charles C Thomas.
MacKinnon, R. A., & Michels, R. (1971). The psychiatric interview in clinical practice. Philadelphia: Saunders.
McDonald, H. C. (1963). The practical psychology of police interrogation. Santa Ana, CA: Townsend.
McGough, L. S. (1992). Commentary: The occasions of perjury. In S. J. Ceci, M. D. Leichtman, & M. Putnick (Eds.), Cognitive and social factors in early deception (pp. 147-167). Hillsdale, NJ: Lawrence Erlbaum.
McQuaig, J. H., McQuaig, P. L., & McQuaig, D. (1981). How to interview and hire productive people. New York: Frederick Fell.
Mettler, G. B. (1977). Criminal investigation. Boston: Holbrook.
Metzer, K. (1977). Creative interviewing: A writer's guide to gathering information by asking questions. Englewood Cliffs, NJ: Prentice-Hall.
Miller, G. R., deTurck, M. A., & Kalbfleisch, P. J. (1983). Self-monitoring, rehearsal, and deceptive communication. Human Communication Research, 10, 97-117.
Miller, G. R., & Simmons, H. W. (Eds.). (1974). Perspectives on communication in social conflict.
Englewood Cliffs, NJ: Prentice-Hall.
Molyneaux, D., & Lane, V. W. (1982). Effective interviewing: Techniques and analysis. Boston: Allyn and Bacon.
Morris, J. R. (1973). Newsmen's interview techniques and attitudes toward interviewing. Journalism Quarterly, 50, 539-542.
Morrison, J. (1993). The first interview: A guide for clinicians. New York: Guilford.
Moston, S. J., & Stephenson, G. M. (1992). Predictors of suspect and interviewer behavior during police questioning. In F. Losel, D. Bender, & T. Bliesener (Eds.), Psychology and law: International perspectives (pp. 210-218). Berlin: Walter de Gruyter.
Moston, S. J., & Stephenson, G. M. (1993). The changing face of police interrogation. Journal of Community and Applied Social Psychology, 3, 101-115.
Mulbar, H. (1951). Interrogation. Springfield, IL: Charles C Thomas.
Penman, R. (1990). Facework & politeness: Multiple goals in courtroom discourse. Journal of Language and Social Psychology, 9, 15-38.
Peskin, S. H. (1978). Attorney-client interviews: Strategy and tactics. Trial, 14, 43-45.
Reiser, D. E., & Schroder, A. K. (1980). Patient interviewing: The human dimension. Baltimore: Williams and Wilkins.
Rich, J. (1968). Interviewing children and adolescents. London: Macmillan.
Robbins, C. E. (1980). How to make dramatic use of witnesses to win at trial. Englewood Cliffs, NJ: Executive Reports.
Roblee, C. L., & McKechnie, A. J. (1981). The investigation of fires. Englewood Cliffs, NJ: Prentice-Hall.
Rothblatt, H. B. (1982). Vital elements in preparing the witness for cross-examination. Trial, 18, 48-51.
Royal, R. F., & Schutt, S. R. (1976). The gentle art of interviewing and interrogation: A professional manual and guide. Englewood Cliffs, NJ: Prentice-Hall.
Sayles, L. R., & Strauss, G. (1981). Managing human resources. Englewood Cliffs, NJ: Prentice-Hall.
Schwartz, L. E. (1973). Proof, persuasion, and cross-examination: A winning new approach in the courtroom (Vols. 1 & 2). Englewood Cliffs, NJ: Executive Reports.
Steller, M., & Boychuk, T. (1992). Children as witnesses in sexual abuse cases: Investigative interview and assessment techniques. In H. Dent & R. Flin (Eds.), Children as witnesses (pp. 47-71). New York: Wiley.
Stewart, C. J., & Cash, W. B. (1974). Interviewing: Principles and practices. Dubuque, IA: William C. Brown.
Stiff, J. B., Kim, H. J., & Ramesh, C. N. (1992). Truth biases and aroused suspicion in relational deception. Communication Research, 19, 326-345.
Stoeckle, J. D. (Ed.). (1987). Encounters between patients and doctors. Cambridge, MA: MIT Press.
Tierney, K. (1970). Courtroom testimony: A policeman's guide. New York: Funk & Wagnalls.
Trial Diplomacy Journal. (1981). Obtaining an admission of perjury: Contrasting techniques of cross-examination, 4, 22-34.
Van Meter, C. H. (1973). Principles of police interrogation. Springfield, IL: Charles C Thomas.
Vrij, A. (1994). The impact of information and setting on detection of deception by police detectives. Journal of Nonverbal Behavior, 18, 117-136.
Walters, B. (1970). How to talk with practically anybody about practically anything. Garden City, NY: Doubleday.
Weber, O. J. (1981). Attacking the expert witness. Federal Insurance Counsel Quarterly, 31, 299-319.
Wellman, F. L. (1953). The art of cross-examination. New York: Macmillan.
Yeschke, C. L. (1987). Interviewing: An introduction to interrogation. Springfield, IL: Charles C Thomas.
Younger, I. (1982). A practical approach to the use of expert testimony. Cleveland State Law Review, 31(Winter), 1-42.
Zuckerman, M., DePaulo, B., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14, pp. 1-49). New York: Academic Press.

By PAMELA J. KALBFLEISCH, University of Wyoming

Copyright of Journal of Language & Social Psychology is the property of Sage Publications Inc.
and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.
Source: Journal of Language & Social Psychology, Dec94, Vol. 13 Issue 4, p469, 28p, 1 chart. Item Number: 9703310614

Adaptation and communicative design: Patterns of interaction in truthful and deceptive conversations
Authors: Cindy H. White and Judee K. Burgoon
Human Communication Research (Thousand Oaks), Jan 2001, Vol. 27, Issue 1, pp. 9-37. ISSN: 0360-3989
Subject Terms: Theory; Communication; Interpersonal communication; Behavior
Abstract: Two theoretical frameworks that examine the nature of adaptability and mutual influence in interaction, interpersonal deception theory and interaction adaptation theory, were used to derive hypotheses concerning patterns of interaction that occur across time in truthful and deceptive conversations. Two studies were conducted in which senders were either truthful or deceptive in their interactions with a partner who increased or decreased involvement during the latter half of the conversation.
Copyright Sage Publications, Inc. Jan 2001
Full Text: Two theoretical frameworks that examine the nature of adaptability and mutual influence in interaction, interpersonal deception theory and interaction adaptation theory, were used to derive hypotheses concerning patterns of interaction that occur across time in truthful and deceptive conversations. Two studies were conducted in which senders were either truthful or deceptive in their interactions with a partner who increased or decreased involvement during the latter half of the conversation.
Results revealed that deceivers felt more anxious and were more concerned about self-presentation than truthtellers prior to the interaction and displayed less initial involvement than truthtellers. Patterns of interaction were also moderated by deception. Deceivers increased involvement over time but also reciprocated increases or decreases in receiver involvement. However, deceivers were less responsive than truthtellers to changes in receiver behavior. Finally, partner involvement served as feedback to senders regarding their own performance.

Perhaps the most essential feature of human interaction is that it involves adaptation. Participation in even a simple conversation requires that communicators accommodate and adjust to one another in many ways: taking turns at talk, adjusting rates of speech, nodding and gesturing to clarify or signal understanding. Patterns of adaptation and adjustment "undergird human interactions and relationships" (Burgoon, Stern, & Dillman, 1995, p. 3); they form the basis of interaction and social order. An understanding of patterns of adaptation is therefore necessary for understanding communication and its role in social processes. Although adaptation is present in all interactions, one way to explore the nature and impact of adaptation is to examine communication situations where adjustment and accommodation may be difficult to manage or where adaptation may be disrupted. Deception represents one such communication situation. A considerable amount of research has examined the behaviors that distinguish truthtellers from deceivers, but less attention has been devoted to understanding how deception is managed in ongoing interactions (Buller & Burgoon, 1996). This aspect of deception is important because deception typically occurs in conversation; it requires, as does all conversation, the co-participation of communicators.
According to Thompson (1986), "Deception illuminates one of the most fundamental problems of behavioral science: the properties of natural design. The essence of design is the matching of a form of a behavior or structure to the circumstance in which that designed structure or behavior is employed" (p. 53). Many features of communication, both verbal and nonverbal, are "designed" to uphold the typical circumstance of presumed truthfulness (Grice, 1989). For instance, communicators respond to the intent of a question rather than the literal meaning. Deception, however, is a situation where actions only appear to fit the circumstance. Deceptive behavior is "behavior designed to defeat a design" (Thompson, 1986, p. 56). Viewed from this perspective, deception provides a particularly interesting context in which to examine patterns of adaptation and adjustment for several reasons. First, as Thompson (1986) notes, deceptive behaviors are strategically manipulated to match the circumstances in which they occur. As such, deceptive communication may demonstrate the extent of the adaptability of communicative behavior. Second, because most deceptive interchanges are situations in which the goals and motives of senders and receivers are incongruent, deception provides a context where the impact of incongruent goals on the coordination and management of interaction can be explored. Third, deception typically occurs in interpersonal communication contexts where the mutual influence of senders and receivers on one another strongly affects interaction outcomes. The studies reported here were undertaken to examine the nature and limits of interaction adaptation by examining patterns of interaction between partners in both truthful and deceptive conversations. 
Two theoretical frameworks that consider the role of mutual influence in interaction, interpersonal deception theory (Buller & Burgoon, 1996) and interaction adaptation theory (Burgoon, Stern, et al., 1995), were used to derive hypotheses concerning the influence of deception on communicators' needs and expectations, initial behavior in interaction, the patterns of interaction that emerge in truthful and deceptive conversations, and the impact of behavior patterns on self-evaluation. Two experimental studies were conducted in which senders were either truthful or deceptive in their interaction with a partner. The first study provided information about the influence of needs, expectations, and desires on initial behavior. The second study examined how truthtellers and deceivers adapted their subsequent interaction behavior to changes in receiver involvement. Together the studies suggest that adaptation in interaction is influenced by deception and that partner behavior is a proximal cue influencing how communicators behave in both truthful and deceptive exchanges.

Deception in Interactive Contexts

Although a large body of literature on deception has emerged, only recently have researchers begun to acknowledge the complexity of managing deception in contexts where senders and receivers are influencing one another. This issue is most directly addressed in interpersonal deception theory (IDT; Buller & Burgoon, 1996). IDT assumes that senders and receivers are active participants who are simultaneously engaged in decoding and encoding tasks and who are oriented toward achieving multiple functions. Perhaps the most important contribution of IDT is its recognition that senders and receivers influence one another's behavior as deceptive interchanges unfold.
"The sequencing of actions and reactions between sender and receiver sets up an interaction dynamic in which behavior patterns displayed early differ from those displayed late as each person adjusts to the other's verbal and nonverbal behavior" (Buller & Burgoon, 1996, p. 232). Evidence suggests that deceivers do change their behavior across time, often adjusting so that their communication style appears truthful (Burgoon, Buller, White, Afifi, & Buslig, 1999). Research examining patterns of interaction between senders and receivers reveals that both reciprocity and compensation may occur in deceptive interactions (Burgoon, Buller, Ebesu, et al., 1996; Burgoon, Buller, White, et al., 1999). Moreover, communicators may use these patterns to influence others or control the conversation. Insight into how this adaptation process works can be gained from a recently advanced theoretical framework, interaction adaptation theory (Burgoon, Stern, et al., 1995), which focuses on the processes of adaptation in interaction.

Examining Adaptation in Interaction

Interaction adaptation theory (IAT; Burgoon, Stern, et al., 1995) proposes that adaptation in interaction is responsive to the needs, expectations, and desires of communicators and affects how communicators position themselves in relation to one another and adapt to one another's communication. Adaptation may take several forms, four of which are considered here: approach, avoidance, reciprocity, and compensation. IAT takes considerable care to provide a set of criteria for distinguishing patterns of adaptation (Burgoon, Stern, et al., 1995; Burgoon & White, 1997). Specifically, when examining adjustments or accommodations of behavior in interaction, behavior can be described in ways that attend to changes in one individual's behavior or in ways that describe the extent to which the behavior of one individual is directed toward and contingent upon the behavior of another individual (Burgoon, Dillman, & Stern, 1993).
Movement toward another is defined as approach; movement away is defined as avoidance. Reciprocity occurs when one communicator responds, in a similar direction, to a partner's behavior with behavior of comparable functional value. Compensation occurs when one communicator responds with behavior of comparable functional value but in the opposite direction. However, in some cases, resistance to change in the form of maintenance can also be considered a form of compensation. So, in order for reciprocity or compensation to occur, we must be able to demonstrate that a change in one dyad member's behavior is dependent upon prior actions of the other. Guided by a set of principles addressing the biological, social, and communicative functions of adaptation, "IAT proposes that synchrony, matching, and reciprocity are the default condition in human interaction" (Burgoon, Stern, et al., 1995, p. 13), but compensation may also occur. For instance, although an employee who is reprimanded by a boss may instinctively wish to reciprocate negative affect, the employee may enact a somewhat pleasant and calm demeanor in order to placate the superior--a pattern of compensation. Moreover, because of the continual adjustment of communicators to one another, different patterns of behavior may occur within an interaction across time (Burgoon, Stern, et al., 1995; VanLear, 1991). IAT notes that individuals bring to interactions certain requirements that reflect basic human needs, expectations about behavior based on social norms, and desires for interaction based on goals and personal preferences. These required, expected, and desired (RED) elements combine to form a net assessment of the anticipated behavior in interaction called the interaction position (IP). Knowledge of the IP of a communicator is useful for anticipating that individual's initial behavior as well as subsequent strategic moves and responses to changes in a partner's behavior.
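The definitions above can be summarized as a simple classification rule. The sketch below is an illustrative reduction, not part of IAT itself; the function name, the signed "change" inputs, and the treatment of an exact zero as maintenance are all simplifying assumptions.

```python
def classify_adjustment(own_change: float, partner_change: float) -> str:
    """Classify one communicator's adjustment relative to a partner's
    prior change in the same behavior (e.g., involvement).

    Same direction as the partner's change      -> reciprocity
    Opposite direction to the partner's change  -> compensation
    No change at all                            -> maintenance
    (maintenance is treated here, as in the text, as a form of
    compensation: resistance to the partner's change)
    """
    if own_change == 0:
        return "maintenance"
    if own_change * partner_change > 0:
        return "reciprocity"
    return "compensation"

# A partner decreases involvement (-1.0); the communicator also
# decreases (-0.5): same direction, so the pattern is reciprocity.
pattern = classify_adjustment(-0.5, -1.0)  # -> "reciprocity"
```

Real coding schemes must also establish that the change is contingent on the partner's prior behavior, which a sign comparison alone cannot do.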
Consider how an individual's own IP affects the initial choice of a communication style. The choice to be involved, pleasant, informal, and the like should be a function of the person's own needs, desires, and goals (one's own IP). Additionally, patterns of interaction between communicators may be predicted by examining the relationship between the anticipated IP and the actual behavior (A) of the partner. When IP and A match, a stable, reciprocal pattern of interaction is predicted. Discrepancies between IP and A may elicit compensation or reciprocity, depending on the valence of A in relation to IP. For instance, an individual may typically desire very involving, informal conversation, but based on past experience may anticipate that interaction with a coworker will be rather uninvolved and formal. If the individual finds that the coworker is, in fact, very warm and engaging, the warmth and engagement will likely be reciprocated because such behavior is more positively valenced than the IP. Thus, in cases where A is more positively valenced than IP, reciprocity is predicted. Alternatively, when IP is more positively valenced than A, compensation or maintenance (which is assumed to be a form of compensation) is anticipated, at least temporarily. In IAT terms, this is a situation where the interaction position (IP--anticipating a warm, engaging interaction) is more positively valenced than the actual behavior encountered (A--the detached, cold style of the partner).

Patterns of Interaction in Deception

If, as Thompson (1986) suggests, successful deception involves behavior designed to defeat a design, then a key aspect of deceptive interchanges is likely to be the management of patterns of interaction. Burgoon, Stern, et al. (1995) note that the "presence of ulterior motives and perceptions of trust and honesty" (p. 335) may significantly influence patterns of interaction.
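The IP/A comparison described above can be sketched as a decision rule. This is an illustrative simplification of IAT's predictions, not the theory itself; the function name, the numeric valence inputs, and the outcome labels are all hypothetical.

```python
def predicted_pattern(ip_valence: float, a_valence: float) -> str:
    """IAT-style prediction sketch: compare the valence of the partner's
    actual behavior (A) with the net anticipated interaction position (IP).

    A more positive than IP  -> reciprocate the partner's behavior
    IP more positive than A  -> compensate (or maintain one's own level)
    A matches IP             -> a stable, reciprocal pattern
    """
    if a_valence > ip_valence:
        return "reciprocity"
    if a_valence < ip_valence:
        return "compensation/maintenance"
    return "reciprocity (stable match)"

# The coworker example: an uninvolved, formal interaction is anticipated
# (negative IP valence), but the partner is warm and engaging (positive
# A valence), so reciprocity is predicted.
outcome = predicted_pattern(-0.5, 0.5)  # -> "reciprocity"
```

The rule captures only the direction of the predicted pattern; IAT additionally expects compensation to be temporary and harder to sustain than reciprocity.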
To better understand how initial behavior is likely to be affected, we can examine the impact of deception on each of the elements of the RED. First, deceivers and truthtellers should have different goals (desires) for the interaction. Deceivers have a specific goal of producing a believable performance; truthtellers may be cognizant of wanting to be believed, but it is unlikely this desire has the same clear impact that a deception goal does. Second, requirements may differ. Because deception should generate more arousal and negative affect, deceivers should be likely to experience and display more anxiety and arousal than truthtellers. Third, both truthtellers and deceivers should also be aware of the need to conform to social expectations of moderate involvement. Because these preinteraction requirements, expectations, and desires have not been empirically verified, a first research question asked:

RQ1: To what extent are there differences in truthtellers' and deceivers' self-assessed requirements, expectations, and desires?

In turn, the combined elements of requirements, expectations, and desires should affect initial interaction behavior. The anticipated effects of each of the RED elements on initial behavior are presented in Figure 1. It might appear that these competing goals would offset one another, resulting in a situation where deceivers are able to accurately mimic the initial behavior of truthtellers. However, IAT proposes that the elements of the RED are hierarchically ordered, with requirements exerting a stronger influence on behavior than expectations. Thus, inclinations to show involvement may be overridden by arousal management concerns. Additionally, IDT (Buller & Burgoon, 1996) alerts researchers to the influence of nonstrategic behaviors, which may prevent communicators from enacting the precise communicative display they wish to present.
Based on past research, we anticipate that, regardless of self-assessed aspects of pre-interaction factors (which may be affected by a number of self-awareness processes), deceivers and truthtellers should display different levels of involvement initially in the interaction (Burgoon, Buller, White, et al., 1999). Thus, it is anticipated that:

H1: Deceivers display lower levels of initial involvement than truthtellers.

The effects of deception on interaction are, however, likely to dissipate to some extent as senders are able to polish their strategic performances and to reduce nonstrategic leakage. If we think about adaptation in terms of changes across time, it is likely that deceivers will increase their involvement across time in a pattern of approach. As a result, it is predicted that:

H2: Deceivers display higher levels of approach than truthtellers.

Within truthful interactions, expectations and conversational norms, which dictate that communicators will display moderately high levels of involvement in the interaction, should be prominent in determining responses to changes in partner behavior. Requirement level elements such as anxiety should have limited influence, and although individualized goals (the D element of RED) should have some impact on truthtellers, the nature of these goals will vary across interactants. Past research suggests that although moderately high involvement is expected, increases in involvement typically are seen as positive expectancy violations and lead to positive evaluations (Burgoon & Le Poire, 1993; Le Poire & Burgoon, 1994). Increases in involvement are therefore typically reciprocated. In IAT terms, the situation is one where A exceeds the IP. Thus, it is predicted that:

H3a: Truthtellers reciprocate increased involvement.

Decreases in involvement, on the other hand, should be seen as negatively valenced in truthful interactions because they violate social norms (Burgoon & Newton, 1991). Although Burgoon, Stern, et al.
(1995) suggest that entrainment is likely when conversation is driven by expectations, truthtellers may be concerned with making the interaction run smoothly and with meeting the social norms of the situation. Within the IAT framework, this type of situation should produce compensation in an attempt to model the socially appropriate behavior. However, compensation is a difficult pattern of interaction to maintain and one that is most likely to be invoked when communicators anticipate that they can have a strong impact or when the partner is positively regarded (Burgoon, Le Poire, & Rosenthal, 1995). Thus, it seems likely that truthtellers will be willing to make only minor adjustments in their own behavior and will, in the face of continuing partner declines in involvement, reciprocate. Thus, it is predicted that:

H3b: Truthtellers initially compensate decreased involvement but reciprocate it over time.

Deceptive interchanges, however, present a somewhat different set of circumstances. IDT (Buller & Burgoon, 1996) predicts a general pattern of reciprocity in interpersonal deception as a result of adaptation to the normal rhythm and meshing of interaction. This prediction is in line with IAT's (Burgoon, Stern, et al., 1995) assumption that reciprocity is the default condition for interaction. However, the exact nature of responses to change in behavior during interaction should depend on deceivers' requirements, expectations, and desires. Deceivers should be aware of the situational expectations and are likely to have self-presentation concerns that lead them to attempt to enact a pleasant, involved demeanor (Buller, Burgoon, White, & Ebesu, 1994). However, they are also expected to experience anxiety and discomfort during deception--elements which operate at the requirement level and may result in a propensity toward avoidant responses.
Moreover, deceivers are engaged in strategic behavior management designed to suppress leakage; so, their ability to enact additional responses designed to strategically manage the conversation may be limited.

[Figure 1]

Thus, although it is expected that deceivers would want to compensate for decreased involvement, the form that compensation takes may be attenuated by limited behavioral and cognitive resources. Alternatively, when a partner increases involvement, anxiety and negative affect should be suppressed somewhat. Deceivers should be able to use the partner's behavior as a model for their own behavior, and even though they are cognitively busy and currently engaged in strategic behavior management, they should be able to accomplish reciprocity because it is the default pattern for most interactions. Following the reasoning of IDT and using the framework of IAT, it is predicted that:

H4a: Deceivers reciprocate increased involvement.
H4b: Deceivers compensate decreased involvement by maintaining a higher level of involvement.

Finally, the effect of patterns of interaction on interaction outcomes is an important area of study which has received very little attention (Burgoon, Stern, et al., 1995). With regard to deceptive interchanges, IDT (Buller & Burgoon, 1996) proposes that interaction outcomes such as perceived success or accuracy are subject to a recency effect so that the behavior of a partner at the end of the interaction becomes an important indicator of whether the interaction was successful or appropriate. Specifically, IDT predicts that evaluations of the interaction by senders should be related to the type of behavior displayed by receivers. Thus, it is predicted that:

H5: Senders' (truthtellers and deceivers) evaluation of their own interaction effectiveness is positively related to the level of receiver involvement at the end of the interaction.

METHOD

Two studies are reported here.
The basic design for both studies involved asking senders to be truthful or deceptive in an interaction with an individual they had just met, and asking receivers to serve as confederates by altering the involvement they displayed at a designated point in the interaction. Participants in both studies received identical inductions and completed identical pre-interaction measures. The conversations in Study 1 were terminated prior to the receiver behavior change. Participants in Study 2 completed the entire interaction and were exposed to the receiver behavior change. In Study 1, we were able to assess the pre-interaction factors of requirements, expectations, and desires as well as the extent to which senders adjusted their involvement behavior during the first half of the conversation. In Study 2, we were able to assess the same pre-interaction factors, sender involvement behavior across the interaction (including responses to receiver behavior change), and sender assessments of perceived self-effectiveness in the interaction. Because the studies were identical in the pre-interaction factors assessed and in the behavior coded in the first half of the interaction, we combined data from both studies to test Research Question 1, Hypothesis 1, and Hypothesis 2.

Procedures--Study 1 and Study 2

Participants. Participants for both studies (N = 96 for Study 1; N = 96 for Study 2) were students enrolled in undergraduate communication courses at a large, public university. The majority of participants were drawn from a course required for business administration students and thus were typically not communication majors. As an incentive, course credit was awarded for participation. Participants for each study were paired to create 48 same-sex dyads (24 male; 24 female) for each study (a total of 96 dyads).1

Procedures. Interactions were conducted at an on-campus research site, which includes an apartment-like interview room equipped for videotaping.
Upon arrival, participants were randomly assigned to the role of sender (those participants induced to tell the truth or to deceive) or receiver (those participants asked to alter behavior later in the conversation) and were separated for training and instruction.

Truth/deception induction. Senders were told that they would be discussing a set of seven moral dilemmas. It was explained that often when people discuss such situations, they tend to answer with the "socially desirable" answers rather than give their true opinion, and this study was designed specifically to investigate conversations in which truth and deception were present.2 Participants instructed to be truthful were to be completely honest as they discussed what they would do in the dilemmas. Participants instructed to deceive were told that one of the issues of interest in the research was understanding how people go about presenting views or opinions which they do not really believe, and they were told that the ability to do so was sometimes interpersonally necessary. They were instructed to misrepresent or contradict their true attitudes and opinions about each of the dilemmas.

Involvement increase/decrease induction. Receivers were given the same explanation of the nature of moral dilemmas as senders. Then, they were told that the focus of the study was understanding how individuals respond to different communication styles. It was explained that in order to investigate communication style, they would be asked to enact their natural communication style during discussion of the first three dilemmas and then alter their style at the beginning of discussion of the fourth dilemma. Receivers were then given instructions on the types of behavior which should accompany increased involvement and decreased involvement.
To provide a situation where the change in receiver behavior would not be "leaked" to senders in the beginning of the interaction, receivers were cued to change their behavior by a color-coded dot on the dilemma card.

Pre-interaction measures. The pre-interaction measures were designed to assess requirements, expectations, and desires, which would serve as a general assessment of the interaction position held at the beginning of the interaction. Identical pre-interaction measures were completed in Study 1 and Study 2. Because a comparison showed no significant differences in the two samples in terms of ratings on pre-interaction measures or initial behavior, reliabilities for the pre-interaction measures were calculated with the combined samples (N = 192). Participants completed the pre-interaction measures after receiving the induction for their role as sender or receiver in the upcoming interaction. Requirements, expectations, and desires (components of the IP) were measured using scales composed of bipolar adjective items. Two measures provided information about requirements: (a) ratings of anticipated affect during the interaction, Cronbach's coefficient α = .87 (3 items), and (b) ratings of concern about anxiety management during the interaction, α = .86 (5 items). Expectations were measured through ratings of how involved participants expected to be in the interaction, α = .80 (5 items), and how pleasant they expected to be, α = .84 (3 items). Desires were measured via self-presentation items, α = .87 (5 items).3

Discussion. After receiving instructions and completing pre-interaction measures, participants were taken to the interaction room, where they were asked to discuss each dilemma in a way that allowed both participants to share their attitudes or opinions.
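The scale reliabilities reported above are Cronbach's alpha coefficients. As a minimal sketch of how such a coefficient is computed (the function name and the ratings matrix are invented for illustration; only the formula, based on item variances and the variance of the summed scale, is standard):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 respondents x 3 bipolar adjective items
ratings = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 1],
])
alpha = cronbach_alpha(ratings)
```

For these invented ratings the items are strongly intercorrelated, so alpha comes out high, comparable in magnitude to the values reported for the actual scales.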
In Study 1, participants were allowed to discuss three topics and then were interrupted by the experimenter; the interruption was made just prior to the point in the discussion when the manipulation would have been enacted. Participants were told that before completing the rest of the discussion, some assessments of the interaction were needed. In actuality, the experiment was terminated after the measures were completed. In Study 1, discussions lasted approximately five minutes. In Study 2, the receiver enacted the increase or decrease in involvement beginning with the fourth topic and continued to enact the manipulation for the remainder of the interaction. This manipulation allowed us to assess how truthtellers and deceivers adapted to changes in receiver behavior. Although participants were expecting to discuss seven topics, the discussion was stopped after the sixth topic to avoid the possibility that a natural change in involvement (either a decrease or increase) might accompany the last topic of discussion as participants anticipated the end of the conversation. Discussions in Study 2 lasted approximately 10 minutes.

Post-interaction measures. Following the interaction, senders (truthtellers and deceivers) provided ratings of the extent to which they had been truthful and the motivation they had felt to be convincing. As a manipulation check, senders were asked to rate the truthfulness of the response they gave to each dilemma on a scale of 1 to 10, with 1 being completely untruthful and 10 being completely truthful. Instructions for this scale stressed the importance of participants indicating the veracity of what they actually said, regardless of the instructions they had been given prior to the interaction.4 In Study 2, senders (n = 48) provided ratings of the extent to which they perceived they had been effective in the interaction.
Senders completed this four-item measure with regard to their perceptions after topics one, four, and six (α = .87). Both senders and receivers (N = 96) provided an assessment of the difficulty of fulfilling their roles in the interaction. Senders rated the difficulty of presenting their viewpoint in the interaction, and receivers rated the difficulty of changing their behavior in the interaction. Three items elicited ratings of difficulty (not at all difficult/very difficult), effort required (very little effort/very much effort), and the extent to which the task was very easy/very hard, α = .79.

Coded Nonverbal Behavior--Study 1 and Study 2

Pairs of coders (N = 4) were trained to rate sender and receiver involvement and pleasantness; although coders knew the basic issues being examined in the research, they were blind to the specific hypotheses. Working from the videotapes and watching only the sender or receiver (whichever they had been assigned), coders provided ratings after each topic was completed in the discussion. Although there was some variability in the amount of time each dyad spent discussing each topic, coding by topic of discussion, rather than by predetermined time periods, allows for the assessment of segments of behavior which correspond directly to the flow of the interaction (Bakeman & Gottman, 1997). Global ratings of involvement were made on a 9-point, bipolar scale consisting of the following items: very uninvolved/very involved, very detached/very engaged, very nonimmediate/very immediate, very inexpressive/very expressive, very inattentive/very attentive (interitem α = .93).
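Agreement between paired coders of this kind is typically summarized with an intraclass correlation computed from a two-way ANOVA decomposition. The sketch below shows one common consistency-type formulation for the reliability of averaged ratings; it is offered as an illustration rather than a reproduction of the article's exact procedure, and the data are hypothetical.

```python
import numpy as np

def icc_average(ratings):
    """Consistency-type intraclass correlation for the reliability of the
    average of k raters, from a two-way ANOVA decomposition:
    (MS_targets - MS_error) / MS_targets."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)          # target (segment) means
    col_means = y.mean(axis=0)          # rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows

# Hypothetical ratings of 4 interaction segments by a pair of coders
segments = np.array([
    [5.0, 6.0],
    [3.0, 3.0],
    [7.0, 6.0],
    [4.0, 5.0],
])
icc = icc_average(segments)
```

Because rater (column) variance is removed before computing the error term, a constant offset between two coders does not lower this consistency-type coefficient.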
Interrater reliability, based on Ebel's intraclass correlation (Guilford, 1954), was .96 for sender ratings and .97 for receiver ratings.5

RESULTS

Analysis Plan

Study 1 was designed as a 2 (truth/deception) x 2 (receiver change: increase/decrease) x 2 (role: sender/receiver) x 3 (time periods) mixed model analysis of variance, with two between-subjects factors (truth/deception and receiver change) and two within-subjects factors (role and time). Study 2 followed a similar design, but its time factor reflected six rather than three time periods. Initial analyses of the data from Study 1 and Study 2 indicated minimal differences in the pretest measures and initial behavior displayed by participants. Thus, where appropriate, data from the two studies were pooled for analyses.6 A cyclic counterbalancing procedure was used to control for the effects of topics discussed; it involved rotating topic order so that each topic was discussed first, second, third, and so on, an equal number of times. However, when examining changes in behavior over time, the cyclic counterbalancing technique does not clearly separate topic effects from time effects (Keppel, 1991). In order to manage the influence of topic and time effects, involvement scores for senders and receivers--the primary dependent measures--were residualized for the effects of topic (as were receiver pleasantness scores for the manipulation check).7 Additionally, it should be noted that several of the repeated-measures analyses could not be conducted when all factors were included in the model because this analysis produced a singular variance-covariance matrix. When this occurred, the within-subjects factor of role--which indicated whether the participant was a sender or receiver--was not included for analysis. Analyses showed the data met the conditional homoscedasticity assumption in all cases (F_max < 3.0) and the normality of cells assumption in all cases.
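Residualizing a score for topic effects amounts to subtracting each topic's mean score, which is equivalent to regressing the scores on topic dummy codes and keeping the residuals. A minimal sketch with hypothetical data (the function name and values are invented):

```python
import numpy as np

def residualize_for_topic(scores, topics):
    """Remove topic effects by centering scores within each topic.

    The residuals are orthogonal to topic membership, so remaining
    variation across time periods cannot be attributed to which topic
    happened to be discussed."""
    scores = np.asarray(scores, dtype=float)
    topics = np.asarray(topics)
    resid = np.empty_like(scores)
    for t in np.unique(topics):
        mask = topics == t
        resid[mask] = scores[mask] - scores[mask].mean()
    return resid

# Hypothetical involvement ratings, each labeled with the topic discussed
scores = np.array([5.0, 6.0, 4.0, 5.0, 7.0, 6.0])
topics = np.array([1, 1, 2, 2, 3, 3])
resid = residualize_for_topic(scores, topics)
```

After residualization, the mean residual within every topic is zero, so a topic that happened to elicit high involvement no longer inflates the scores of the time periods in which it appeared.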
Where the compound symmetry conditions for repeated measures analyses were violated, Huynh-Feldt corrected degrees of freedom were used.

Manipulation Checks and Preliminary Analyses

Deception. Senders rated the honesty of each response they gave on a 10-point scale, ranging from completely untruthful (1) to completely truthful (10). The combined data from Study 1 and Study 2 revealed that deceivers rated their responses as considerably less truthful than truthtellers (deception M = 2.61; truth M = 9.23), F (1, 94) = 305.55, p < .001, η² = .77. Because we allowed participants to discuss each topic for as long as they felt was appropriate, it was possible that truthful and deceptive conversations could differ significantly in length, which could in turn influence other aspects of the interaction. However, a comparison of the length of truthful and deceptive conversations revealed no significant differences in either study.8

Involvement. Receivers were asked to alter their behavior beginning with discussion of the fourth topic in Study 2 and to sustain the behavior throughout the remainder of the interaction. This type of change should produce a linear trend for receiver involvement in each condition. To assess whether receivers performed this manipulation, a mixed model analysis of variance on coder ratings of receiver involvement was conducted, with the six time periods as a within-subjects factor and truth/deception and manipulation (increase/decrease) as between-subjects factors.
This analysis, which utilized Huynh-Feldt corrected degrees of freedom, revealed a main effect for time, F (2.45, 107.72) = 27.50, p < .001, partial η² = .39, and, as expected, a significant manipulation x time interaction, F (2.45, 107.72) = 106.81, p < .001, partial η² = .71, indicating that receivers enacted differing levels of involvement in the increase and decrease conditions.9 Trend analyses for time periods three through six revealed a significant linear trend x increase/decrease condition interaction, F (1, 83) = 140.76, p < .001, partial η² = .63; significant quadratic and cubic trends also emerged, but they accounted for significantly less variance than the linear trend. Simple effect tests confirmed the linear trend within each manipulation: increased involvement, F (1, 83) = 27.96, p < .001, partial η² = .25; decreased involvement, F (1, 83) = 113.33, p < .001, partial η² = .58, with quadratic and cubic trends emerging but with less strong effects. These results indicate that receivers did change their involvement in the manner directed and sustained the change across time.10 The means for receiver involvement are presented in Table 1.

Research Question 1

Research Question 1 asked about potential differences in ratings of requirements, expectations, and desires for truthtellers and deceivers. Data from Study 1 and Study 2 were combined for this analysis because participants in both studies completed identical measures of these elements. Analyses examining the differences between truthtellers and deceivers were conducted for each of the RED measures: affect and anxiety (R elements), expectations about their own involvement and pleasantness during the interaction (E elements), and self-presentation goals (D elements).
Independent groups t-tests revealed that senders instructed to deceive reported higher levels of anxiety management concern than truthtellers, t (94) = -2.46, p < .02, d = .51, and higher concern with self-presentation goals than truthtellers, t (94) = -3.07, p < .01, d = .63. No significant differences emerged for ratings of affect or expectations. The means for each of the measures are presented in Table 2.

Hypothesis 1

The first hypothesis predicted that deceivers would display lower levels of initial involvement than truthtellers. Again, the data from Study 1 and Study 2 were combined. Because this analysis explored only the first time period, it was possible to include topic as a between-subjects factor in the analysis. A condition x manipulation x topic analysis of variance on sender involvement scores revealed only a significant main effect for condition, F (1, 95) = 10.83, p < .01, η² = .10. Deceivers were significantly less involved in the initial topic of discussion than truthtellers (deception M = 4.9, truth M = 5.31). Thus, initial differences in self-reported requirements and desires were accompanied by behavioral differences, as predicted.

Hypothesis 2

Hypothesis 2 predicted that deceivers would display higher levels of approach than truthtellers. In order to examine this prediction, an analysis of variance was conducted on sender involvement across the three time periods prior to the manipulation of receiver behavior. Sender involvement scores were adjusted for topic effects. Results revealed a main effect for time, F (1.88, 173.25) = 6.22, p < .01, partial η² = .06, and a condition x time interaction, F (1.88, 173.25) = 4.37, p < .02, partial η² = .05. Simple effect tests revealed a significant linear trend for deceivers, F (1, 94) = 13.21, p < .001, partial η² = .12, but no trend for truthtellers, F < 1.0.
Consistent with the hypothesis, deceivers showed increasing involvement across the three time periods, indicating approach, while truthtellers showed no change in involvement across the three time periods (see Table 3).

Hypotheses 3 and 4

Hypotheses 3 and 4 predicted different patterns of adaptation within truth and deception. Truthtellers were expected to reciprocate increased involvement and to initially compensate decreased involvement, but to reciprocate the decrease over time. Deceivers were expected to reciprocate increased involvement and to display compensation in the form of maintenance in response to decreased involvement. Essentially, these predictions anticipate different trends in adaptation across time for truthtellers and deceivers and can be tested by examining both the omnibus tests and trends within conditions.

[Table 1]

[Table 2]

The omnibus condition x manipulation x time repeated measures analysis of variance on sender involvement revealed a significant main effect for time, F (3.75, 165.07) = 2.15, p < .04, partial η² = .05, a significant condition (truth/deception) x time interaction, F (3.75, 165.07) = 3.25, p < .01, partial η² = .07, and a significant time x manipulation interaction, F (3.75, 165.07) = 13.08, p < .01, partial η² = .23. The anticipated three-way interaction, which would reflect the differential impact of truth/deception in the increase/decrease conditions across time, did not emerge; however, the two-way interactions between time and truth versus deception, and time and receiver change, indicate that sender behavior differed for truthtellers/deceivers in response to receiver change (as predicted).
At the dyadic level, intraclass correlations between senders and receivers computed within each time period showed stronger responses to reduced than to increased involvement, with senders compensating for receivers' declines in involvement, consistent with Hypothesis 4 (see Table 4; Note 11).

[Table 3]

The impact of receiver behavior change on sender behavior was further investigated with 1-degree-of-freedom contrasts within conditions. One way to think about predictions for reciprocity and compensation is as an interrupted time-series model (Burgoon, Stern, et al., 1995). Specifically, an interrupted time-series model suggests that one should consider the impact of the manipulation on both the mean level of the behavior (the intercept) and on the change across time in response to the manipulation (the slope). Framed in this manner, the hypotheses imply that there should be a change in intercepts due to the change in confederate behavior as well as changes in slopes due to differential effects of truth and deception on behavior over time. To test the intercept change, we conducted a planned contrast that compared the mean level of sender involvement for the baseline time periods to the mean level of involvement for the periods in which receivers increased or decreased involvement (using contrast codes -1, -1, -1, 1, 1, 1). This analysis is roughly equivalent to testing the "interruption" effect in time-series models. A second set of planned contrasts tested for trends in the data across time. This analysis is analogous to testing the slope in time-series models by fitting orthogonal polynomial coefficients to the last four time periods (see Note 12). Although it would be possible to test for a cubic trend, this analysis was employed in only one instance where the data suggested it was warranted. Where compound symmetry assumptions were violated, the contrast analyses employed Huynh-Feldt corrected degrees of freedom.
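The planned-contrast logic described above can be illustrated with a simplified sketch (not the authors' code, and without the mixed-design and Huynh-Feldt machinery of the actual analyses): each subject's six time-period scores are weighted by the contrast codes and the resulting contrast scores are tested against zero. The data here are hypothetical, with a built-in drop after the manipulation point:

```python
# Within-subjects planned contrasts over six time periods, hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40
scores = rng.normal(5.0, 0.8, (n, 6))  # hypothetical subjects x 6 periods
scores[:, 3:] -= 0.4                   # simulated post-manipulation decline

def planned_contrast(data, codes):
    """Weight each subject's scores by the contrast codes, then run a
    one-sample t-test of the contrast scores against zero (1-df contrast)."""
    L = data @ np.asarray(codes, dtype=float)
    return stats.ttest_1samp(L, 0.0)

# Intercept ("interruption") contrast: baseline vs. manipulation periods
intercept = planned_contrast(scores, [-1, -1, -1, 1, 1, 1])
# Linear-trend contrast over the last four periods
linear = planned_contrast(scores, [0, 0, -3, -1, 1, 3])
print(intercept.pvalue, linear.pvalue)
```

Because the codes sum to zero within each set, each contrast isolates one df of the time effect, which is what lets the intercept shift and the slope be tested separately.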
Means for sender involvement across time are presented in Table 5. Hypothesis 3, which predicted (a) reciprocity by truthtellers in response to increased involvement and (b) compensation followed by reciprocity in response to decreased involvement, was tested within the truth condition, comparing responses to the increase and decrease manipulation. Results revealed a time block x increase/decrease interaction, F(1, 84) = 25.40, p < .001, η² = .23, such that senders slightly increased involvement in response to receiver increases (premanipulation M = 5.23, averaged across Time 1 through Time 3; postmanipulation M = 5.33, averaged across Time 4 through Time 6) and decreased involvement in response to receiver decreases (premanipulation M = 5.15, averaged across Time 1 through Time 3; postmanipulation M = 4.77, averaged across Time 4 through Time 6). Thus, the analysis of mean changes indicates that truthtellers displayed a general pattern of reciprocity in response to both increased and decreased involvement. Tests of the trends across time indicated a linear trend x manipulation interaction, F(1, 84) = 4.71, p < .03, η² = .05, and simple effect analyses conducted on each set of means revealed linear trends in each condition: increase F(1, 84) = 5.52, p < .02, η² = .06; decrease F(1, 84) = 29.26, p < .001, η² = .26. These analyses of mean differences indicate the expected pattern of reciprocity emerged in response to increased involvement, but the anticipated initial compensation did not emerge for decreased involvement. Truthtellers reciprocated both increases and decreases in involvement in a linear fashion (see Figure 2a). However, truthtellers did appear to resist the decrease in involvement displayed by their partners.
In response to the rather substantial decrease displayed by receivers in enacting the manipulation (Time 3 M = 5.27; Time 4 M = 3.60), senders showed only a modest accommodation (Time 3 M = 5.19; Time 4 M = 4.94), with reductions in involvement emerging over time as receivers continued to decrease involvement. The intraclass correlation analyses also imply that truthtellers did not match the decline of their partners, although their level of involvement did decrease.

[Table 4] [Table 5]

Hypothesis 4 predicted that deceivers would (a) reciprocate increased involvement by receivers and (b) compensate, via a pattern of maintenance, decreased involvement by receivers. Three sets of analyses were conducted to test this hypothesis. Results revealed a time block x increase/decrease interaction, F(1, 72) = 30.25, p < .001, η² = .30, which indicated a basic pattern of reciprocity in response to both increased involvement (premanipulation M = 4.83, averaged across Time 1 through Time 3; postmanipulation M = 5.06, averaged across Time 4 through Time 6) and decreased involvement (premanipulation M = 5.09, averaged across Time 1 through Time 3; postmanipulation M = 4.87, averaged across Time 4 through Time 6). Contrasts testing changes in deceiver behavior across time revealed a linear trend x manipulation interaction, F(1, 72) = 11.49, p < .001, η² = .14. Deceiver involvement remained relatively stable across time in response to receiver increases but decreased across time in response to receiver decreases (see Figure 2b). Simple effect trend analysis revealed no significant linear or quadratic trend for sender involvement in response to increased involvement, contrary to expectations. It had been anticipated that deceivers would reciprocate increased involvement because it would be positively valenced and because it could make it easier for deceivers to display similar behavior.
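The intraclass correlations invoked above, which index whether dyad members match (reciprocity, positive values) or offset (compensation, negative values) one another's involvement within a time period, can be sketched as follows. This is a simplified one-way ICC on hypothetical dyad data, not the authors' code or data:

```python
# One-way intraclass correlation, ICC(1), for sender-receiver involvement
# within a single time period, using hypothetical dyad scores.
import numpy as np

def icc1(pairs):
    """ICC(1) from an (n_dyads, 2) array: (MSB - MSW) / (MSB + (k-1)*MSW)."""
    pairs = np.asarray(pairs, dtype=float)
    n, k = pairs.shape
    grand = pairs.mean()
    ms_between = k * ((pairs.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((pairs - pairs.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(2)
dyad_level = rng.normal(5.0, 0.7, 50)          # shared dyad involvement level
sender = dyad_level + rng.normal(0, 0.3, 50)
receiver = dyad_level + rng.normal(0, 0.3, 50)
print(icc1(np.column_stack([sender, receiver])))  # positive: matching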
[IMAGE GRAPH] Captioned as: Figure 2a: Figure 2b: In response to decreased involvement, it was predicted that deceivers would display compensation in the form of maintenance. In terms of trends, this hypothesis predicts that deceivers in the decrease condition should seek to offset partner decrease and should continue to do so across the manipulation time periods. The means indicate that, contrary to the hypothesis, deceivers were initially responsive to reduced receiver involvement but compensated subsequently. Exploratory simple effect trend analyses revealed significant linear and cubic trends, linear F (If 72) = 12.82, p < .001, n^sup 2^=.15; cubic F (1, 72) = 4.62, p < .04,112= .06 (cubic contrast codes 0, 0, If -3, 3, -1), such that deceivers increased involvement at Time 5 then declined again. The negative intraclass correlations, which indicate compensation in response to decreased partner involvement, also suggest that deceivers' behavior may have offset partner behavior. Examined together, these results suggest that deceivers enacted both reciprocity and compensation in response to receivers' reduced involvement, possibly in an attempt to regain a more normal level of involvement and to stabilize the interaction. Finally, the effects of deception on adaptation may be more clearly understood when viewed in comparison to the adaptation of truthtellers. Thus, a third set of analyses explored the nature of change across time comparing truthful and deceptive senders. Analyses that compared truth and deception within the two involvement conditions revealed an overall pattern of reciprocity, but one which was influenced by the type of involvement receivers displayed. Within the increase condition, a linear trend emerged, F (1, 66) = 6.10, p < .02,112 = .09, which indicated that senders increased involvement across time regardless of whether they told the truth or deceived. 
That is, increased receiver involvement elicited reciprocity from deceivers in the same manner as truthtellers. Within the decrease condition, a linear trend also emerged, F (1,101) = 44.76, p <.001, 112 =31; additionally, the linear trend x truth/deception interaction approached significance, F (1, 101) = 2.62, p <.06, n^sup 2^=.03. Both truthtellers and deceivers decreased involvement, but truthtellers showed a steeper and steadier decline than deceivers; the latter showed more adjustment and resistance. This pattern would be characterized as compensatory in the sense of maintaining a more involved pattern. The trend lines are redisplayed in Figures 3a and 3b to make the comparison between truth and deception evident. In sum, Hypothesis 3a was supported; 3b was partially supported. Truthful senders increased involvement in response to partner increases. They did not actively compensate initial decreased involvement as predicted, but they did display the predicted reciprocity of decreased involvement in the later phases of the interaction. Moreover, the mean difference between sender and receiver involvement during the initial decrease time period and the negative intraclass correlations are suggestive of attempts to offset decreased receiver involvement somewhat. Hypothesis 4 received mixed support. Deceivers reciprocated increased involvement, as predicted, but evidence of compensatory maintenance in the face of partner reductions was mixed. The intraclass correlations show compensation. The mean difference analyses indicate that deceivers reciprocated the decline but in an attenuated and variable fashion. Hypothesis 5 The fifth hypothesis predicted that sender evaluations of their own success in the interaction would be related to the level of involvement displayed by the receiver at the end of the interaction. This prediction is based on the assumption that senders interpret receiver behavior as relevant information about their own performance. 
A strong, positive relationship between sender evaluations of their own communication effectiveness and receiver involvement was found, r (46) = .49, p < .001. However, this analysis ignores any effect that truth/deception might have on effectiveness and involvement. Regression analyses that included the receiver involvement x truth/deception term revealed that the interaction did not account for significant, additional variance. Thus, receiver behavior influenced how effective senders evaluated themselves to be, with senders feeling they were more effective when receivers displayed more involvement. DISCUSSION To successfully adapt to the complex terrain of interaction, communicators must manage their own needs, expectations, and desires while accommodating and adapting to the ever-changing interaction landscape. These studies help to reveal the nature of the "maps" truthtellers and deceivers create, and the ways in which patterns of interaction emerge across deceptive and truthful interactions. In accord with the principles of IAT (Burgoon, Stern, et al., 1995), we examined whether pre-interaction maps-in the form of requirements, expectations, and goals-influenced initial communication behavior. In these studies, deception and truth reflect manipulated interaction goals that should have affected the anxiety management concerns and self-presentation desires of communicators. Deceivers felt more anxious and were more concerned about self-presentation than truthtellers prior to the interaction; that is, truthtellers and deceivers approached the interaction from different "interaction positions." In turn, these differences in interaction position were reflected in initial behavior, with deceivers showing less involvement than truthtellers during discussion of the first topic. [IMAGE GRAPH] Captioned as: Figure 3a: Figure 3b: IAT (Burgoon, Stern, et al., 1995) also postulates that different interaction positions prompt different adaptation patterns. 
Deceivers displayed a clear pattern of approach across the first half of the interaction, a finding which supports the IDT prediction that deceivers are able to adjust their communication performances (Buller & Burgoon, 1996). By increasing involvement as the interaction progressed, deceivers were able to more closely approximate appropriate levels of involvement and thereby imitate the displays of truthtellers. It is interesting to note that, comparatively, truthtellers showed virtually no change across the same segment of the interaction. So, whereas the increase in involvement displayed by deceivers appeared to be an attempt to match normal levels of involvement in interaction, it may have missed the mark with regard to the natural progression of interaction insofar as it involved a different pattern of interaction than that enacted by truthtellers. If we think of deception as representing a situation where incongruent goals are operating, this result suggests that interaction may be somewhat less stable in such situations as communicators seek to manage conflicting demands of the situation. It was predicted that truthful senders would reciprocate increased receiver involvement, and they would initially compensate decreased involvement but then reciprocate as the interaction progressed. Truthtellers did respond to changes in receiver behavior by adapting, primarily displaying reciprocity. They increased involvement slightly across time in response to increased receiver involvement, and they decreased involvement across time in response to decreased receiver involvement. But evidence of compensation, as an initial response to the decrease, was not found. It is interesting to note that although truthtellers did decrease their involvement in response to receiver declines (reciprocity), they remained clearly more involved than receivers throughout the remainder of the conversation. 
This finding highlights an important feature of reciprocity and compensation and bolsters the argument that compensation should be operationalized more broadly to include not only movement in the opposite direction of the partner but also resistant behavior which is enacted within a frame of reciprocity. Burgoon, Stern, et al. (1995) describe this phenomenon as "putting on the brakes" to prevent a precipitous decline in the dyadic involvement level. Deceivers were expected to reciprocate increased partner involvement but to compensate decreased partner involvement via a pattern of maintenance. Deceivers did show a slight, albeit nonsignificant, increase in response to increased receiver involvement, but the increase leveled off by interaction's end. The more limited adaptation displayed by deceivers may be related to the way in which deceivers interpreted increased receiver involvement. The results concerning sender perceptions of feedback from receivers suggest that deceivers saw increased involvement by receivers as positive feedback. Thus, they may have seen little need to make adjustments and thus did not reciprocate as much as truthtellers. The findings regarding deceiver response to decreased partner involvement are not conclusive, but they provide some insight regarding the nature of adaptation. Both truthtellers and deceivers responded to decreased involvement by reducing involvement. However, truthtellers displayed a steady decline in involvement whereas deceivers displayed more variability-decreasing initially, increasing involvement in the middle time period, but then decreasing involvement again in the final time period. We believe that it is likely this variability in deceiver involvement reflects deceivers' attempts to be responsive both to the interaction style of the receiver and to the expectations of "normal" interaction behavior. 
However, further research is needed to explore the ways in which competing demands of interaction influence the types of adjustments communicators make. One final issue of concern was the impact of receiver behavior on sender evaluations of the interaction. Specifically, IDT (Buller & Burgoon, 1996) predicts that evaluations of the interaction should be strongly influenced by the behaviors that receivers display. The results of Study 2 endorse this view. Sender assessments of the effectiveness of the interaction appear to be related to the message communicated, via nonverbal involvement, by the receiver. This finding is important because it indicates that partner behavior is an important source of information for evaluating one's own performance. However, a limitation of this study is that the time available for subject participation did not permit exploring the meaning which senders attributed to receiver behavior. The results of the two studies reported here have several interesting implications for interpersonal deception theory (Buller & Burgoon, 1996) and research on deception. The assumption that truthful and deceptive performances are distinguished primarily by differences in nonstrategic leakage oversimplifies the processes articulated here. Deceivers adjusted their behavior across time, which indicates the importance of examining changes in behavior. Also, the patterns of interaction displayed are complex. The IDT prediction of reciprocity is accurate at some level-both truthful and deceptive senders showed some evidence of reciprocity-- but the trend analyses indicate that truthful senders appear to be more closely approximating receiver behavior across time than deceivers. This finding suggests that deceiver attempts to defeat the design of natural interaction may be both beneficial and problematic. 
By not reciprocating, deceivers may be able to maintain a demeanor more in line with expected truthful performances, and they may be able to influence receiver behavior, thereby adjusting the tenor of the interaction. In face-to-face deceptive exchanges, it is likely that disjointedness of interaction leads to suspicions by receivers and observers that something is "amiss." This idea has been suggested by Miller and Stiff (1993) and Buller and Burgoon (1996), among others, and documented by Stiff, Kim, and Ramesh (1992), who found that changes across time in response latency better distinguished truthtellers from deceivers than pure differences in mean levels of response latency. Although the focus of this study is not on the identification of patterns that can be used to detect deception, the results do suggest a need to carefully examine the changes and magnitudes of adjustments that occur across conversational episodes. These results highlight the value of IAT (Burgoon, Stern, et al., 1995) as a framework for understanding patterns of interaction. The impact of different goals on patterns of interaction is perhaps the most important issue examined here. Deceivers showed strong approach tendencies during the baseline interaction while truthtellers showed virtually no change in involvement. To the extent that these patterns are communicative, deceiver behavior could be read as "warming" to the interaction, whereas truthteller behavior could be read as following more standard norms of polite behavior. If initial interaction strikes a tone for the rest of the conversation, deceivers may be at a disadvantage in that their dampened initial performances cannot be easily overcome. On the other hand, it is possible that this "warming" has a positive effect on how receivers perceive senders, because the interaction improves as it goes along. 
However, one important limitation of these studies is that deceivers were asked to misrepresent their attitudes and opinions without any apparent consequences of these behaviors. Although communicators do sometimes deceive in situations where consequences are not apparent, it seems likely that the adaptation of deceivers in highly consequential situations would be different than the actions of the deceivers observed here. In response to decreased involvement, deceivers displayed an initial decline in involvement. This initial decline was, however, followed by what appears to be an active attempt to redirect the course of the interaction through increased involvement; a later decrease in involvement appears to be a response to continued declining receiver involvement. This result highlights an important contribution of IAT: the foregrounding of behavior in the interaction as a key determinant in eliciting partner responses. The behavior one communicator displays is, to some extent, always contingent on what the interaction partner does. We take this to be encouraging news for communication researchers, but we note that the conclusions we can draw from the current studies are limited because receivers' behavior may have become somewhat unnatural once they attempted to enact the involvement changes. Taken together, the results of these studies point to some fruitful areas for future research. One of the things that distinguishes truthful from deceptive interactions is the extent to which communicators are able to coordinate their communicative behavior and achieve interactional synchrony (Bernieri & Rosenthal, 1991). Past research has revealed that deceptive messages may generate the perception that something is "not quite right" (Bond, et al., 1992). In interactive deception, this feeling may be generated by subtle differences in the way adaptation occurs across time in the interaction. 
It seems likely that deceptive interactions are marked by lower levels of synchrony and coordination of interaction than truthful interactions. If deception constitutes an event where communicators have incongruent goals, then one of the ways incongruent goals may interfere with interaction is at the most basic level-coordination of nonverbal behaviors. Of course, the coordination that ultimately emerges in interaction is influenced by both parties. In the case of deception, receivers may differ in their ability to accommodate deceiver misadjustments. Further research that explores synchrony and interpersonal coordination interaction seems warranted, as does research that explores the way in which communicators respond to partner adaptation that is nonnormative. Finally, future research that explores both verbal and nonverbal behavior will provide a more complete description of the patterns of interaction and the features of communication that are most and least responsive to partner influence. Such research, coupled with work on the relationship between pre-interaction factors and interactional behavior, will help to reveal the communicative design from which adaptation emerges. NOTES 1. The decision to pair participants in same-sex dyads was made because some previous research suggests that women may accommodate to men in cross-sex interactions (Giles, et al., 1990). We expected that this type of accommodation, which seems to be related to sex role socialization, might influence the findings of this study. It is clearly possible that there could be gender differences between male and female dyads in same-sex interaction. This possibility was examined in initial analyses. 2. 
The topics concerned (a) calling in sick to work, (b) talking with the spouse of a friend who was being unfaithful, (c) dealing with a minor auto accident, (d) accurate reporting of all income taxes, (e) giving an opinion about a creative endeavor of a friend, and (f) maintaining a code of silence while in service on a jury. Topic order was counterbalanced to reduce the possibility that the progression of the conversation would influence how each topic was discussed. The topics selected reflected situations which could be labeled as social dilemmas in that they focused on circumstances in which there was a tension of some sort, typically between what would be good for the individual and what was socially appropriate. Social dilemmas were selected for discussion for two reasons. First, they were likely to generate more interest than other types of conversational topics because they asked participants to reason through the consequences of behavior. Second, the topics were selected because they provided a situation where a number of responses were plausible, so that there was not a clear correct choice. 3. Measures of arousal management (anxiety) and self-presentation were modified versions of scales used by Dillard, Segrin, and Harden (1989). 4. The RED elements measured at the end of Study I were the same RED elements measured prior to the interaction, with one exception. Measures of expectations asked about expected involvement and pleasantness of partner rather than of self. This adjustment reflects the IAT (Burgoon, Stern et al., 1995) assumption that once interaction has taken place, expectations are related to actual behavior. Because these data are primarily descriptive, they are not reported here. 5. Ratings of involvement reflected global assessments of the extent to which participants enacted behaviors that convey involvement such as forward lean, proximity, partner orientation, and gaze. 
We believe that global assessments of the relational messages conveyed in interaction provide more accurate assessments of the nature of the interaction than assessments of the frequency of display of specific behaviors. (Refer to Burgoon & Baesler, 1991, and Burgoon, Stern, et al., 1995, for further examination of the impact of micro versus more global assessments of behavior.) 6. Because some research has indicated that gender may influence deception displays (Cody & O'Hair, 1983), and the ways in which communicators accommodate a partner's style (Montepare & Vega, 1988), gender was included in initial analyses. No significant differences were obtained, so gender was not included in any further analyses. 7. This procedure is similar to removing practice effects in a repeated measures analysis (Keppel, 1982). 8. We did not assess the amount of time spent on each topic. 9. Additionally, analyses were conducted on receiver pleasantness in order to provide a more complete profile of the demeanor of receivers during the interaction. Results revealed a main effect for time, F (2.41,105.86) = 48.17, p <.001, partial y2= .52, and a time x manipulation interaction, F (2.41,105.86) = 68.32, p < .001, partial il 2= .61. Examination of the means for involvement and pleasantness, which are presented in Table 1, indicates that increased involvement by receivers was accompanied by slightly increased pleasantness, whereas decreased involvement by receivers was accompanied by decreased pleasantness. Although receivers were given instructions about altering involvement, they appear to have enacted a behavioral profile that involved either increasing involvement and conveying positive regard or decreasing involvement and conveying negative regard. 10. The quadratic and cubic trends for receiver involvement accounted for only a small portion of the variance in relation to the linear trend. 
However, the presence of these trends indicates that receivers were adjusting throughout the conversation. It is possible, therefore, that sender adaptation reflects this variability in receiver behavior. Although using confederate receivers, as has been done in some prior research (see e.g., Burgoon, Stern, et al., 1995), would have afforded greater experimental control, we opted to have participants enact the change in receiver behavior so as to capture the natural evolution of typical conversation. 11. Intraclass correlations that reflected the effect of both truth/deception and receiver increase/decrease show more variability than the correlations for just increase/decrease. We have not included these because the small n for each correlation (n = 12) makes it difficult to interpret what the variability means, although the pattern for increase and decrease is still borne out. 12. Although it would be feasible to test for trends across the three manipulation time periods (i.e., periods four, five, and six), this approach would not capture information about initial behavior during the manipulation in relation to behavior before the manipulation; information about a sender's behavior before the manipulation is essential to determining if the pattern exhibited across the manipulation time periods is indicative of reciprocity or compensation. Tests for linear and quadratic trends were conducted by applying the following contrast codes: 0, 0, -3, -1,1, 3 for the linear trend and 0, 0, -1, 1,1, -1 for the quadratic trend. REFERENCES Bakeman, R., & Gottman, J. M. (1997). Observing interaction: An introduction to sequential analysis (2nd ed.). New York: Cambridge University Press. Bernieri, F. J., & Rosenthal, R. (1991). Interpersonal coordination: Behavioral matching and interactional synchrony. In R. S. Feldman & B. Rime (Eds.), Fundamentals of nonverbal behavior (pp. 401-432). New York: Cambridge University Press. Bond, C. R., Omar, A., Pitre, U., Lashley, B. 
R., Skaggs, L. M., & Kirk, C. T. (1992). Fishylooking liars: Deception judgment from expectancy violation. Journal of Personality and Social Psychology, 63, 969-977. Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory, 6, 203-242. Butler, D. B., Burgoon, J. K., White, C. H., & Ebesu, A. S. (1994). Interpersonal deception: VII. Behavioral profiles of falsification, equivocation, and concealment. Journal of Language and Social Psychology, 13, 366-395. Burgoon, J. K., & Baesler, E. J. (1991). Choosing between micro and macro nonverbal measurement: Application to selected vocalic and kinesic nonverbal indices. Journal of Nonverbal Behavior, 15, 57-78. Burgoon, J. K., Buller, D. B., & Guerrero, L. K. (1995). Interpersonal deception: IX. Effects of social skill and nonverbal communication on deception success and detection accuracy. Journal of Language and Social Psychology, 14, 289-311. Burgoon, J. K., Butler, D. B., Ebesu, A. S., White, C. H., & Rockwell, P. (1996). Interpersonal deception: XI. Effects of suspicion on communication behaviors and perceptions. Communication Theory, 6, 243-267. Burgoon, J. K., Butler, D. B., White, C. H., Afifi, W. A., & Buslig, A. (1999). The role of conversational involvement in deceptive interactions. Personality and Social Psychology Bulletin, 25, 669-685. Burgoon, J. K., Dillman, L., & Stem, L. A. (1993). Adaptation in dyadic interaction: Defining and operationalizing patterns of reciprocity and compensation. Communication Theory, 4, 293-316. Burgoon, J. K., Ebesu, A. S., White, C. H., Koch, P, Kikuchi, T., & Alvaro, E. (1998). The multiple faces of interaction adaptation. In M. Palmer & G. Barnett (Eds.), Progress in communication sciences: Theory and research in mutual influence (Vol. 14, pp. 191-220). Stamford, CT: Ablex. Burgoon, J. K., & Le Poire, B. A. (1993). 
Effects of communication expectancies, actual communication, and expectancy disconfirmation on evaluations of communicators and their communication behavior. Human Communication Research, 20, 75-107. Burgoon, J. K., Le Poire, B. A., & Rosenthal, R. (1995). Impact of expectancies, actual communication, and expectancy disconfirmation on nonverbal interaction patterns. Journal of Experimental Social Psychology, 31, 287-321. Burgoon, J. K., & Newton, D. A. (1991). Applying a social meaning model to relational message interpretations of conversational involvement: Comparing observer and participant perspectives. Southern Communication Journal, 56, 96-113. Burgoon, J. K., Stern, L. A., & Dillman, L. (1995). Interpersonal adaptation: Dyadic patterns of interaction. New York: Cambridge University Press. Burgoon, J. K., & White, C. H. (1997). Research on nonverbal message production: A view from interaction adaptation theory. In J. 0. Greene (Ed.),Message production: Advances in communication theory (pp. 279-312). Mahwah, NJ: Erlbaum. Cody, M. J., & O'Hair, H. D. (1983). Nonverbal communication and deception: Differences in deception cues due to gender and communication dominance. Communication Monographs, 50, 175-192. Dillard, J. P., Segrin, C., & Harden, J. M. (1989). Primary and secondary goals in the production of interpersonal influence messages. Communication Monographs, 56,19-38. Giles, H., Coupland, I., & Coupland, N. (Eds.). (1991). Contexts of accommodation. New York: Cambridge Press. Grice, H. P (1989). Studies in the ways of words. Cambridge, MA: Harvard University Press. Guilford, J. P. (1954). Psychometric methods (2nd ed.). New York: McGraw-Hill. Keppel, G. (1982). Design and analysis: A researcher's handbook (2nd ed.). Englewood Hills, NJ: Prentice Hall. Keppel, G. (1991). Design and analysis: A researcher's handbook Ord ed.). Englewood Hills, NJ: Prentice Hall. Le Poire, B. A., & Burgoon, J. K. (1994). 
Two contrasting explanations of involvement violations: Nonverbal expectancy violations theory versus discrepancy arousal theory. Human Communication Research, 20, 560-591. Miller, G. R., & Stiff, J. B. (1993). Deceptive communication. Newbury Park, CA: Sage. Montepare, J. M., & Vega, C. (1988). Women's vocal reactions to intimate and casual male friends. Personality and Social Psychology Bulletin, 14, 103-112. Stiff, J. B., Kim, H. J., & Ramesh, C. (1992). Truth biases and aroused suspicion in relational deception. Communication Research, 19, 326-345. Thompson, N. S. (1986). Deception and the concept of behavioral design. In R. W. Mitchell & N. S. Thompson (Eds.), Deception: Perspectives on human and nonhuman deceit (pp. 53-65). New York: State University of New York Press. Van Lear, C. A. (1991). Testing a cyclical model of communicative openness in relationship development: Two longitudinal studies. Communication Monographs, 57, 202-218. Zuckerman, M., & Driver, R. (1985). Telling lies: Verbal and nonverbal correlates of deception. In A. W. Siegman & S. Feldstein (Eds.), Nonverbal communication: An integrated perspective (pp. 129-147). Hillsdale, NJ: Erlbaum.

CINDY H. WHITE, University of Colorado, Boulder
JUDEE K. BURGOON, University of Arizona

Cindy H. White (Ph.D., University of Arizona, 1996) is assistant professor of communication at the University of Colorado, Boulder. Judee K. Burgoon (Ed.D., West Virginia University, 1974) is professor of communication at the University of Arizona. The authors would like to thank David Buller, Sally Jackson, Calvin Morrill, and Carl Ridley for their assistance with this project. This article is part of the first author's Ph.D. dissertation, which was completed under the direction of the second author. It was recognized as a "Top 4" paper in the Interpersonal & Small Group Division at the annual meeting of the National Communication Association, Chicago, November 1997.
Correspondence concerning this article should be sent to Cindy H. White, Department of Communication, University of Colorado, CB 270, Boulder, Colorado 80309; email to cindy.white@colorado.edu.

Human Communication Research, Vol. 27, No. 1, January 2001, 9-37. © 2001 International Communication Association.

Reproduced with permission of the copyright owner. Further reproduction or distribution is prohibited without permission.

-------------------------

Another look at information management: A rejoinder to McCornack, Levine, Morrison, and Lapinski
Communication Monographs, Annandale, Mar 1996
--------------------------------------------------------------------------------
Authors:       Buller, David B; Burgoon, Judee K
Volume:        63
Issue:         1
Start Page:    92
ISSN:          03637751
Subject Terms: Lying
               Interpersonal communication

Abstract:
Buller and Burgoon consider how the information management concept in Interpersonal Deception Theory is related to Information Manipulation Theory and address the criticisms of Steven A. McCornack, Timothy R. Levine, Kelly Morrison, and Maria Lapinski of Buller and Burgoon's dimensional approach to deceptive messages.

Copyright Speech Communication Association Mar 1996

Full Text:
In this brief response to McCornack, Levine, Morrison, and Lapinski (this volume), we consider how the information management concept in Interpersonal Deception Theory (IDT) is related to Information Manipulation Theory (IMT) and address their criticisms of our dimensional approach to deceptive messages. IDT and IMT are compatible in some respects but differ on (a) the primacy of Grice's Cooperative Principle (CP), (b) the number of conversational expectations senders can violate to produce deceptive messages, and (c) whether communicators are sensitive to these violations.
We focus on three of their claims: Grice's cooperative principle (CP) and its four maxims represent the best theoretical explanation for information management during deception; IDT takes a decoding perspective on information management, whereas IMT takes an encoding perspective on this process; and our experimental manipulations are problematic because they exaggerated changes in information management and produced results unlike natural conversation.

THE HISTORY OF INFORMATION MANAGEMENT IN IDT

Our interest in information management began in 1987 when we made the distinction between strategic and nonstrategic communication (Buller & Burgoon, 1994). The first published discussion of our strategic/nonstrategic distinction occurred in Buller and Aune (1987) and permeated subsequent studies of deception (Buller, Comstock, Aune, & Strzyzewski, 1989; Buller, Strzyzewski, & Comstock, 1991). The term information management as a class of strategic behavior was coined in 1992 and subsumed behaviors that signalled uncertainty, vagueness, reticence, and withdrawal. We considered information management to be largely accomplished through verbal behavior that altered information features in deceptive messages. Our thoughts about these verbal behaviors and information features appeared in a paper presented at the 4th International Conference on Language and Social Psychology in 1991 (Buller & Burgoon, 1991) and are further explicated in our theoretical essay in Communication Theory (Buller & Burgoon, in press). We were certainly aware of McCornack's work as we were formulating our own thinking about information management. We also were strongly influenced by Bavelas, Black, Chovil, and Mullett's (1990) work on equivocation, as well as some of the studies on which McCornack also based his IMT (e.g., Hopper & Bell, 1984; Grice, 1969; Metts, 1989; Metts & Chronis, 1986; Metts & Hippensteele, 1988; Turner, Edgely, & Olmstead, 1975).
It is not surprising, therefore, that the informational features we included in IDT are similar to, but not isomorphic with, IMT. Although Grice's CP and its four maxims represent conversational expectations that can be violated in deceptive messages, we did not place the CP at the center of our definition of information management. Nor did we ignore other information features that previous researchers, particularly Bavelas et al. (1990), implicated in the creation of alternative forms of deception. Our approach to information management is compatible with McCornack's IMT but expands on ideas it advances.

THE COOPERATIVE PRINCIPLE AND ITS MAXIMS

McCornack (1992) contends that Grice's CP represents a theoretical explanation for the manipulation of information in deceptive messages and that the maxims of quantity, quality, relation, and manner represent the relevant domain. McCornack et al. (this volume) criticize the IDT approach to information management as being atheoretical because we neither posit the CP as a single unifying mechanism nor provide an alternative. IDT does offer an explanatory mechanism, as we will demonstrate.

Explanatory Power of CP

McCornack's (1992) use of Grice's CP does not constitute a "theoretical" explanation of how messages deceive. At best, it provides a taxonomy of four conversational expectations that communicators possess. The CP as used in IMT does not explain how, why, or under what circumstances senders flout these conversational expectations to deceive. Neither does it account for receivers' steadfast assumption that senders conform to the CP, despite disconfirming surface features of messages, or the empirical evidence that receivers do in fact detect deception. The second-order unidimensionality of the four maxims as discussed by McCornack et al. (this volume) suggests that we should treat the CP as a single superordinate expectation that is violated when senders deceive.
As a superordinate expectation, the CP alone is hardly a causal mechanism. In IDT, we offer the notion of expectancy violations as a potential explanatory mechanism for understanding how deception transpires. Several theories of communication rely on expectancy violations to explain how communicators enact and interpret conversational behavior (e.g., Expectancy Violations Theory, Burgoon, 1983; Burgoon & Hale, 1988; Cognitive Valence Theory, Andersen, 1985, 1989; Discrepancy-Arousal Theory, Cappella & Greene, 1982). Applied to deception, expectancy violations explain why receivers recognize messages as deceptive. Their relevance to the encoding side of the equation is that senders attempt to keep their violations covert.

Domain of Conversational Expectations

There are also problems with accepting Grice's CP as the principle that specifies the domain of conversational expectations relevant to deceptive messages. Unfortunately, McCornack (1992) does not provide a compelling rationale for accepting the CP as the only theoretical concept around which to organize conversational expectations. His primary justification for accepting the CP appears to be that Grice said the CP existed and contained the four maxims, that Bowers, Elliot, and Desmond (1977) observed that deviations from these four maxims could conceivably create "devious" messages, and that Metts (1989) concurred. This is hardly a compelling, logical rationale for accepting the CP as the single unifying construct in information management. We disagree. As an organizing principle, Grice's CP can be questioned on two fronts. First, our work and that of Bavelas et al. (1990) suggests that the CP is too imprecise and needs to be expanded.
Second, it is not clear whether the four expectations specified by the CP are equally important to deception, whether the quality expectation has primacy over quantity, relation, and manner expectations, as Jacobs, Dawson, and Brashers (this volume) argue, or whether the CP is actually a single superordinate expectation. If it does not specify the entire domain of relevant conversational expectations with precision, if the quality expectation is the primary expectation, or if it is really a single expectation, then the CP is not sufficient for organizing information management. At present, the empirical support for the primacy of Grice's four maxims is not as firm as McCornack et al. (this volume) seem to believe. Turner et al.'s (1975) work revealed only two dimensions, completeness and veridicality. Bavelas (1989; Bavelas et al., 1990) cites relevance and clarity, along with the personalization dimension included in our analysis, and a fourth dimension, the extent to which the sender acknowledges the listener, as features that can be manipulated to create equivocal statements. McCornack's studies of the four maxims do not establish their primacy because he assessed only those four dimensions. He did not measure other message features; therefore, his study could do nothing more than provide support for the four maxims (see McCornack, Levine, Solowczuk, Torres, & Campbell, 1992). Our research (Burgoon, Buller, Guerrero, Afifi, & Feldman, this volume) and Bavelas et al.'s (1990) investigations of equivocation clearly show that senders violate other conversational expectations. Unfortunately, inductively derived theoretical explanations like IMT run the risk of overlooking important facets of a phenomenon because they are limited by the methods and approaches of previous research.
The Domain of Information Management

Given the unresolved theoretical questions surrounding the CP and the shortcomings in the empirical investigations of it, it is premature to limit investigations of the dimensions of information management to those incorporated in Grice's CP. The four conversational expectations associated with the CP are far richer than McCornack (1992) implied. We distinguished between actual and apparent veridicality, informational and conversational completeness, syntactic and semantic clarity, and syntactic and semantic directness. We also contend that Bavelas et al.'s "personalization" expectation is an information feature managed by senders when deceiving. A presupposition of conversation is that people producing messages "own" their messages unless they explicitly disavow them. Consequently, disassociation, verbal nonimmediacy, or depersonalization reflect a central feature of discourse that can be manipulated for deceptive purposes. McCornack et al. (this volume) fault us for providing imprecise definitions for our information management dimensions. To the contrary, our dimensions expand on IMT's four maxims and, therefore, add more precision to the concept of information management. Further, we do not claim that our dimensions are orthogonal, especially the subdimensions within completeness, clarity, and directness, but neither are they completely confounded as McCornack et al. suggest. They reflect sender versus receiver or syntactic versus semantic distinctions. For instance, an utterance that appears sufficient to the receiver within the current conversational context (conversational completeness) may not provide all germane information known by the sender (informational completeness). To illustrate, the reply, "Not too bad," would be conversationally complete in a dinner-party conversation when asked, "How have you been?" But there is much more information known to the sender that would be germane to the question.
This information does not have to be articulated for the reply to be conversationally complete. Consider another example: An irrelevant reply to a question might be semantically indirect, in that the semantic content is not germane, but syntactically direct, because it fits the question/answer pattern for adjacency pairs. Finally, senders reduce message clarity semantically by using terms that obfuscate (e.g., jargon or sophisticated words, such as, "In the Taoist sense, I'm more aligned with a contemporary sophist perspective") or syntactically by linguistic constructions that are indecipherable (e.g., "Accordingly, and to most, I was to have been a sophist"). Thus, these distinctions are meaningful in capturing key nuances in the production and interpretation of messages.

ENCODING VERSUS DECODING PERSPECTIVE

McCornack et al. (this volume) also criticize us for taking a decoding perspective on information management and claim that our primary interest in information management is as a tool for detecting deception. This is a mischaracterization. IDT considers both the encoding and decoding of deceptive messages, as well as the conjoint creation of deception by interactants. It is notable that McCornack does not consistently adhere to an encoding perspective. McCornack et al.'s (1992) test of IMT relied on a decoding methodology, involving over 1,000 naive judges who evaluated surface deviations along the four maxims. Surprisingly, he now criticizes us for using a similar methodology. McCornack et al.'s mischaracterization of IDT may arise from their mistaken impression that our interest in "distinguishing" between truthful and deceptive messages reflects an interest in detection. Actually, we are interested in distinguishing between how senders construct truthful and deceptive messages, how receivers discriminate between them, and how scholars theorize about them.
This mischaracterization also appears to stem from a fundamental disagreement between IMT adherents and us about whether communicators are sensitive to violations of conversational expectations. In relying on Grice's CP, McCornack et al. (this volume) view maxims as tacit assumptions about the principles that guide cooperative and rational exchanges. Communicators supposedly fail to recognize surface deviations from conversational expectations and assume instead that senders are being cooperative. We believe the opposite: Communicators are sensitive to these surface deviations and at times conclude that deception occurred. The empirical evidence favors our position over Grice's and McCornack's assertions to the contrary. It indicates that people do detect differences between honest and deceptive messages (Burgoon, Buller, Ebesu, & Rockwell, 1994; DePaulo, in press; DePaulo, Stone, & Lassiter, 1985). Even McCornack's own test of IMT (see McCornack et al., 1992) revealed different perceptions of honesty according to which information dimensions were manipulated. McCornack (1992) raises the question of whether people can detect these covert violations. IDT says yes, his own work implies the answer is yes, and other evidence suggests that deception is noticeable in the degree to which messages are deviant or out of the ordinary (see, e.g., Bond, Omar, Pitre, Lashley, Skaggs, & Kirk, 1992; Fiedler & Walka, 1993). McCornack et al. (this volume) claim that our position rests on the assumption that deceptive messages necessarily entail substantial and noticeable changes in information and that recipients always judge such changes as deceptive. We make no such assumptions. Information management can include a wide range of adjustments, some of which are quiet, unostentatious, and covert (to use McCornack et al.'s language), others of which are substantial.
These changes need not result in a judgment of deception, but the larger the deviation and the more dimensions entailed, the more likely that suspicion and the possibility of deceit will be entertained. McCornack et al. (this volume) base some of their criticisms of our position on Fiedler and Walka's (1993) assertion that people rely on conventionalized rules to judge the honesty of messages rather than figuring out the validity of behavioral cues. We are surprised by this, inasmuch as we interpreted Fiedler and Walka's position as consistent with ours. Fiedler and Walka proposed that receivers attribute deception when nonverbal behavior is conspicuous, which we believe occurs when deceivers violate nonverbal expectations. Their infrequency rule (i.e., less frequent answers are judged to be more dishonest than more frequent answers) and verifiability rule (i.e., objectively verifiable information is more likely to be judged as deceptive than subjective information that is difficult to verify) also require that receivers attend to the plausibility and type of information provided in messages. That is, conventionalized rules require that receivers attend to message features, and receivers use these rules to assess messages. Ultimately, these assessments help receivers discriminate between messages. We wonder whether McCornack et al. would accept our information management dimensions if we called them "conventionalized rules" or heuristics. Finally, by invoking Fiedler and Walka's idea of conventionalized rules, McCornack et al. have conceded that people have cognitive templates for judging deceptive and truthful messages. Consequently, there must be an explanation accounting for when, how, and why receivers opt to ascribe deception rather than cooperation, or even incompetence, to deviations in message features. Any explanation for these issues is likely to be more consistent with IDT than with IMT's claim about the tacitness of expectations. 
EXPERIMENTALLY MANIPULATING TYPES OF DECEPTION

McCornack et al. (this volume) criticize two aspects of our experimental methodology. They assert that our experimental manipulations exaggerated changes on the information management dimensions and made them noticeable to observers, whereas in natural conversations such changes would be subtle and go unnoticed. This assertion is open to empirical verification, and several sources of evidence dispute the criticism. McCornack's own test of IMT (McCornack et al., 1992) revealed that changes produced by hypothetical circumstances rather than experimental instructions were noticed by observers. Also, our secondary analysis of equivocal messages from Bavelas et al. (1990), which were elicited naturally by experimental circumstances rather than by experimental instructions, produced changes in information management dimensions that were noticeable to observers. Thus, the convergent evidence from three experimental manipulations (hypothetical circumstances, experimental circumstances, and experimental instructions) supports our conclusion that changes in information management dimensions are recognized by senders and receivers. McCornack et al. (this volume) further fault our results on the grounds that we instructed participants to encode messages that deviated on the dimensions we measured. Implicit in their criticism, once again, is the belief that messages in natural conversation are different from the ones we created experimentally. Besides the fact that the convergence of evidence we cited in the preceding paragraph undercuts this criticism, we believe that McCornack et al. miss the mark in other respects. We opted to alter information features by describing in natural language three forms of deception commonly manipulated and recognized by senders and measuring the ways senders then varied information.
We did not, however, describe to senders all of the information features we measured, nor did we provide them with detailed descriptions of the dimensions. Also, we never commented on these dimensions when senders practiced their messages prior to experimental conversations. As we noted, senders could have ignored our instructions or found it impossible to manipulate the information features we briefly described in the instructions, but this was not the case.

SUMMARY

Our approach to information management in IDT is compatible with McCornack's IMT. However, our disagreements concerning the primacy of Grice's CP, the relevant conversational expectations, and communicators' sensitivity to expectancy violations produce points of departure in our approaches. Additionally, unlike IMT, our information management dimensions are intended to underlie both the encoding and decoding of messages and to apply to truthful as well as deceitful discourse. IMT does not provide an explanation for how senders create deceptive messages, how listeners, upon detecting surface deviations, judge sender honesty, or how messages actually deceive. IDT's notions of strategic manipulation of information and expectancy violations are a promising starting point in explaining this process. It is clear that information management in deception is an important area for theoretical development and empirical testing. McCornack and his colleagues have been instrumental in bringing it to the attention of deception researchers. Our approach to information management in IDT, like McCornack's IMT, describes properties entailed in information management during deception. Theoretical exchanges, such as this one, help clarify and extend the idea of information management in both deceptive message production and processing.

REFERENCES

Andersen, P.A. (1985). Nonverbal immediacy in interpersonal communication. In A.W. Siegman & S. Feldstein (Eds.), Multichannel integrations of nonverbal behavior (pp. 1-36).
Hillsdale, NJ: Erlbaum. Andersen, P.A. (1989, May). A cognitive valence theory of intimate communication. Paper presented at the annual meeting of the International Network on Personal Relationships, Iowa City, IA. Bavelas, J.B. (1989, May). Treating deception as discourse. Paper presented at the annual meeting of the International Communication Association, San Francisco, CA. Bavelas, J.B., Black, A., Chovil, N., & Mullett, J. (1990). Equivocal communication. Newbury Park, CA: Sage. Bond, C.F., Omar, A., Pitre, U., Lashley, B.R., Skaggs, L.M., & Kirk, C.T. (1992). Fishy-looking liars: Deception judgment from expectancy violation. Journal of Personality and Social Psychology, 63, 969-977. Bowers, J.W., Elliot, N.D., & Desmond, R.J. (1977). Exploiting pragmatic rules: Devious messages. Human Communication Research, 3, 235-242. Buller, D.B., & Aune, R.K. (1987). Nonverbal cues to deception among intimates, friends, and strangers. Journal of Nonverbal Behavior, 11, 269-290. Buller, D.B., & Burgoon, J.K. (1991, August). The language of interpersonal deception: Falsification, equivocation, and concealment. Paper presented at the 4th International Conference on Language and Social Psychology, Santa Barbara, CA. Buller, D.B., & Burgoon, J.K. (1994). Deception: Strategic and nonstrategic communication. In J.A. Daly & J.M. Wiemann (Eds.), Strategic interpersonal communication (pp. 191-223). Hillsdale, NJ: Erlbaum. Buller, D.B., & Burgoon, J.K. (in press). Interpersonal deception theory. Communication Theory. Buller, D.B., Comstock, J., Aune, R.K., & Strzyzewski, K.D. (1989). The effect of probing on deceivers and truthtellers. Journal of Nonverbal Behavior, 13, 155-169. Buller, D.B., Strzyzewski, K.D., & Comstock, J. (1991). Interpersonal deception: I. Deceivers' reactions to receivers' suspicions and probing. Communication Monographs, 58, 1-24. Burgoon, J.K. (1983). Nonverbal violations of expectations. In J.M. Wiemann & R.P. Harrison (Eds.), Nonverbal interaction (pp.
77-111). Beverly Hills, CA: Sage. Burgoon, J.K., Buller, D.B., Ebesu, A., & Rockwell, P. (1994). Interpersonal deception: V. Accuracy in deception detection. Communication Monographs, 61, 303-325. Burgoon, J.K., Buller, D.B., Guerrero, L.K., Afifi, W.A., & Feldman, C.M. (this volume). Interpersonal deception: XII. Information management dimensions underlying types of deceptive messages. Communication Monographs. Burgoon, J.K., & Hale, J.L. (1988). Nonverbal expectancy violations: Model elaboration and application to immediacy behavior. Communication Monographs, 55, 58-79. Cappella, J.N., & Greene, J.O. (1982). A discrepancy-arousal explanation of mutual influence in expressive behavior for adult and infant-adult interaction. Communication Monographs, 49, 89-114. DePaulo, B.M. (in press). Detecting deception. Current Directions in Psychological Science. DePaulo, B.M., Stone, J.I., & Lassiter, G.D. (1985). Deceiving and detecting deceit. In B.R. Schlenker (Ed.), The self and social life (pp. 323-370). New York: McGraw-Hill. Fiedler, K., & Walka, I. (1993). Training lie detectors to use nonverbal cues instead of global heuristics. Human Communication Research, 20, 199-223. Grice, H.P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press. Hopper, R., & Bell, R.A. (1984). Broadening the deception construct. Quarterly Journal of Speech, 70, 288-302. Jacobs, S., Dawson, E., & Brashers, D. (this volume). Information manipulation theory: A replication and assessment. Communication Monographs. McCornack, S.A. (1992). Information manipulation theory. Communication Monographs, 59, 1-16. McCornack, S.A., Levine, T.R., Morrison, K., & Lapinski, M. (this volume). Speaking of information manipulation: A critical review of research addressing deceptive messages. Communication Monographs. McCornack, S.A., Levine, T.R., Solowczuk, K., Torres, H.I., & Campbell, D.M. (1992).
When the alteration of information is viewed as deception: An empirical test of information manipulation theory. Communication Monographs, 59, 17-29. Metts, S. (1989). An exploratory investigation of deception in close relationships. Journal of Social and Personal Relationships, 6, 159-180. Metts, S., & Chronis, H. (1986, October). An exploratory investigation of relational deception. Paper presented at the annual meeting of the International Communication Association, Chicago, IL. Metts, S., & Hippensteele, S. (1988, February). Characteristics of deception in close relationships. Paper presented at the annual meeting of the Western Speech Communication Association, San Diego, CA. Turner, R.E., Edgely, C., & Olmstead, G. (1975). Information control in conversations: Honesty is not always the best policy. Kansas Journal of Sociology, 11, 69-89.

=============================== End of Document ================================