Information manipulation theory and perceptions of deception in Hong Kong
Communication Reports, Salt Lake City, Winter 1999
--------------------------------------------------------------------------------
Authors: Lorrita N. T. Yeung, Timothy R. Levine, Kazuo Nishiyama
Volume: 12; Issue: 1; Pagination: 1-11; ISSN: 08934215
Subject Terms: Communication Theory; Information; Culture
Geographic Names: Hong Kong
Abstract: This study tests McCornack's (1992) Information Manipulation Theory (IMT) in Hong Kong. IMT views deception as arising from covert violations of one or more of Grice's four maxims (quality, quantity, relevance, and manner).
Copyright Western States Communication Association Winter 1999

Full Text:
This study tests McCornack's (1992) Information Manipulation Theory (IMT) in Hong Kong. IMT views deception as arising from covert violations of one or more of Grice's four maxims (quality, quantity, relevance, and manner). Previous studies conducted in the United States have found that messages violating one or more of the four maxims are rated as less honest than messages that do not violate the maxims. Based upon cultural differences in expectations and social roles, we predicted that only violations of quality (i.e., outright falsification) would be seen as universally deceptive. To test this prediction, McCornack, Levine, Solowczuk, Torres and Campbell's (1992) original study was replicated in Hong Kong (N = 310). The results indicated that violations of quality (falsification) and relevance (evasion) were rated as deceptive in Hong Kong. However, message ratings along all four dimensions were significantly correlated with deception ratings, suggesting that perhaps the results stem from differences in what counts as a covert violation rather than more fundamental differences in the appropriateness of the maxims.

The prevalence of deception in everyday conversation is well documented (DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996; Turner, Edgley, & Olmstead, 1975), as are the potentially negative consequences stemming from the discovery of deception (e.g., McCornack & Levine, 1990). Although research on deception processes has long flourished, researchers have traditionally focused on the falsification of information to the exclusion of more subtle forms of deceptive messages (McCornack, 1992). Lies (i.e., presenting false information), however, represent only one of many ways to deceive another (Bowers, Elliott, & Desmond, 1977; Ekman, 1985; Hopper & Bell, 1984; Turner et al., 1975). By limiting deception only to those acts involving the falsification of information, many verbal acts that are functionally deceptive (e.g., equivocation: Bavelas, Black, Chovil, & Mullett, 1990; evasion: Galasinski, 1994; Turner et al., 1975) are excluded. While investigations of deceptive message design are currently in vogue (e.g., Bavelas et al., 1990; Burgoon, Buller, Guerrero, Afifi, & Feldman, 1996; Jacobs, Dawson, & Brashers, 1996; Galasinski, 1994; McCornack, Levine, Solowczuk, Torres, & Campbell, 1992), this research has characteristically been concerned only with deception in Western cultures. This cultural myopia is unfortunate. Because of cultural differences related to the individualism-collectivism distinction, what counts as deception most likely differs across cultures.
With intercultural interactions becoming increasingly frequent due to advances in communication technology and increases in mobility, differing views of deception may increase the potential for misunderstanding, mistrust, and ill will. It would seem, then, that a cross-cultural examination of deceptive message design is needed. The current study examines the generalizability of Information Manipulation Theory (IMT; McCornack, 1992) by attempting to replicate McCornack et al.'s (1992) seminal work in Hong Kong. This investigation begins with a review of research on information manipulation.

INFORMATION MANIPULATION THEORY

IMT offers a multidimensional approach to deceptive messages, integrating Grice's (1989) theory of conversational implicature with research on deception as information control (e.g., Bavelas et al., 1990; Bowers et al., 1977; Metts, 1989; Turner et al., 1975). Specifically, IMT uses Grice's (1989) Cooperative Principle (CP) and its maxims as a framework for describing a variety of deceptive message forms. IMT views deception as arising from covert violations of one or more of Grice's four maxims (quality, quantity, relevance, and manner). Covert violations of quality involve the falsification of information. Covert violations of quantity can result in "lies of omission." Deception by evasion involves covert violations of relevance, and deception by equivocation results from the covert violation of manner. IMT also offers a pragmatic explanation for why deceptive messages deceive. As McCornack (1992) wrote: It is the principal claim of Information Manipulation Theory that messages that are commonly thought of as deceptive derive from covert violations of the conversational maxims... Because the violation is not made apparent to the listener, the listener is misled by her/his assumption that the speaker is adhering to the CP and its maxims. (pp. 5-6) Thus, covert violations of one or more of Grice's conversational maxims (quality, quantity, relevance, and manner) are believed to result in messages that are functionally deceptive. To date, there have been three tests of IMT: McCornack et al.'s (1992) original study and replications by Jacobs et al. (1996) and Lapinski (1995). Each of the studies provided subjects with a hypothetical situation and one of five message forms. Subjects were asked to rate the messages in terms of honesty. The messages either violated one of Grice's maxims (quality, quantity, relevance, and manner) or were baseline messages designed to be honest (i.e., accurate, informative, clear, and relevant). In each of the three studies, the baseline message was rated as significantly more honest than the four messages violating one of the maxims. That is, each of these three studies found that messages violating one or more of the four maxims are seen as more deceptive than are messages that adhere to Grice's maxims.

Research on Culture and Deception

At a very general level, the concept of deception may be universal. Saarni and Lewis (1993), for example, argued that deception centered around clandestine affairs, protecting one's possessions from a competitor, and feigning emotion occurs in most if not all cultures. Similarly, Buss and Schmitt (1993) implied that deceptive sexual selection strategies might be cross-cultural. The few cross-cultural studies of deception that exist, however, tend to focus on cultural differences. O'Hair, Cody, Wang, and Chao (1990) investigated vocal stress in the truthful and deceptive messages of Chinese immigrants.
The Chinese participants had higher levels of vocal stress when revealing negative emotions. Aune and Waters (1994) found that the more collectivistic American Samoan participants indicated they would be more likely to deceive another on an issue related to family or other ingroup concerns. U.S. Americans, in contrast, were motivated to deceive when they felt an issue was private or when they wanted to protect the target person's feelings. Nishiyama (1993) discussed deception in a cultural framework from a business perspective. Nishiyama suggested that there are a number of strategies and behaviors that are considered everyday business practices in Japan but that may be interpreted as deceptive by U.S. American businesspeople. Commonly misunderstood messages include official statements of policy (Tatemae), which are different from true intentions (Honne), and certain nonverbal behaviors that non-Japanese people find difficult to distinguish. Finally, Lapinski (1995) investigated the relationships between honesty ratings of the four information manipulation dimensions and self-reported cultural orientations. Relevance violations were seen as significantly less deceptive by those with a more collectivistic orientation.

China-U.S. Cultural Differences and Deception

The individualism-collectivism dimension (Hofstede, 1980) is perhaps the most common way of distinguishing between cultures. Collectivism emphasizes the goals of the ingroup over those of the individual. Asia, Africa, South America, and the Pacific Islands have generally been considered to be the locales of collectivistic cultures. Conversely, personal goals are emphasized in individualistic cultures. Those cultures that are characterized by individualism include many of those located in Australia, northern and western Europe, and the United States. Several studies offer empirical support for the Chinese collectivist orientation (e.g., Bond & Kwang-kuo, 1986). There are five reasons why the Chinese collectivist orientation should result in views of deception that diverge from those of Westerners. The first reason involves cultural differences in role expectations. The Chinese have a strong tendency to act according to what is expected of them by others. The Chinese concept of man, "ren," is said to be "based on the individual's transactions with his fellow human beings" (Bond & Kwang-kuo, 1986, p. 220). With this expectation comes a greater concern for the feelings of others in social interactions and a greater stress on the obligations demanded by the social role. Thus, in interacting with others, Chinese tend to give responses that fulfill the social expectations of others, even if those responses are considered deceptive by Western standards. Second, cultural differences exist in conflict avoidance. There is both theoretical and empirical support for the Chinese tendency, compared with Westerners, to avoid conflict (Tang & Kirkbride, 1986). Because deception is an easy means of conflict avoidance, deceptive messages serving this purpose may be more common among Chinese than Westerners. Third, moral orientations differ between cultures. Chinese tend to base their moral decisions on what they think is acceptable to their reference group (Yang, 1986, p. 133). In contrast, Westerners more often form moral judgments according to independently held principles. Instead of being universally applicable like Western moral principles, Chinese moral judgments are related to particular roles and are therefore situational (Chiu, 1991).
It appears that role expectation is a predominant criterion among Chinese in forming such judgments (Chiu, 1990). For example, sometimes the need to act according to role expectation and concern for the other's feelings, on the one hand, and the moral responsibility to be honest, on the other, are in conflict. Manipulating message features by giving ambiguous or partial information would appear to be a compromise that avoids becoming an outright liar. Fourth, the concept of "face" is especially salient in Chinese culture (Chang & Holt, 1994). To the Chinese, the concept of face incorporates two different notions: "lien" and "mianzi" (Hu, 1944). "Lien" refers to the integrity of a person's moral character. "Mianzi," however, is the personal prestige and reputation that come with a person's achievement and success in society. Both are essential for operating as a social being in Chinese society. Being socially skilled involves knowing how to do facework (e.g., giving face, enhancing face, saving face, restoring face, etc.). Many Chinese message strategies are motivated by the wish to do facework. To Westerners, such message strategies may appear unnecessarily indirect. For example, Chinese have a preference for refraining from criticism, especially in public (Bond & Lee, 1981). Jokes and hints are commonly used for broaching unfavorable information to the hearer (Du, 1995). Refusals of invitations are often used rhetorically, both to sound out the sincerity of the inviter and to avoid appearing presumptuous by accepting too readily (Gu, 1990). Such indirect message strategies can also be considered violations of the conversational maxims and thus be construed as "deceptive." Finally, the Chinese use of ritualistic message strategies may appear deceptive in the eyes of Westerners. For example, initial refusals of invitations are not meant to be, and are in fact not taken as, flat denials in the Chinese context. But they may be interpreted literally by Westerners, thus leading to cases of cross-cultural misunderstanding (Kasper & Zhang, 1995). Similarly, the ritualistic strategy of exaggeration is often used among Chinese in doing facework. For example, exaggerated statements that praise the accomplishments of the other and denigrate one's own achievements are frequently used in playing the face game.

Hypotheses

The integration of the theoretical distinctions between Chinese and Western communicative styles with the research on cross-cultural deception leads to predictions regarding perceptions of information manipulation among Chinese in Hong Kong. We anticipate that blatant violations of quality (i.e., outright intentional falsification) will be universally seen as deceptive. Falsification is the most direct and blatant form of deception and is therefore the most likely to be universally seen as deceptive. Differences should exist, however, in the perceptions of more subtle forms of deception (i.e., violations of quantity, relevance, and manner). Omission, equivocation, and evasion involve indirectness, which may be typical of normal, honest communication in Asian cultures. Further, because violations of these maxims may be required to fulfill expectations and social roles, these should not be seen as particularly deceptive in Hong Kong.
Hence, we hypothesize that there will be a main effect for violation type such that violations of quality will be seen as significantly more deceptive than other message forms, and violations of quantity, relevance, and manner will not differ from the baseline honest message. If the data are consistent with our hypothesis, two explanations for these findings will be possible. First, IMT may simply not hold in collectivist cultures because violations of quantity, relevance, and manner are not seen as constituting deception. Consistent with this speculation, Yum (1988) argued that the maxim of manner is not the norm in East Asia. However, an alternative explanation for the expected results is also possible. It may be that covert violations of these dimensions are seen as deceptive, but what counts as a violation differs with culture. That is, if a message that violates a given maxim in North America is not seen as deceptive in Hong Kong, it could be because either that maxim does not apply, or because the message in question is not perceived as a covert violation of that maxim. For this reason, we will also investigate the relationship between message ratings on the four dimensions and perceptions of honesty.

METHOD

Participants

Three hundred and ten undergraduate students in Hong Kong completed a survey very similar to that used by McCornack et al. (1992). Of these, 139 (44.5%) were male, 169 (54.5%) were female, and 2 (0.6%) failed to answer the sex question. The respondents ranged in age from 18 to 25 (M = 21.22, SD = 7.70). Differences between the current method and that of McCornack et al. (1992) are noted below.

Design and Procedures

A 1 x 5 independent-groups design was used. As in previous IMT studies (Jacobs et al., 1996; Lapinski, 1995; McCornack et al., 1992), subjects were presented with a hypothetical situation and one of five messages: a baseline honest message (no violation), a false message (quality violation), a message omitting crucial information (quantity violation), an evasive message (relevance violation), or an equivocal message (manner violation). The situation and messages were taken from the "Committed Chris" situation used by McCornack et al. (1992). Participants read the situation and then rated the message on a 3-item honesty scale (alpha = .86). Manipulation check items for each violation type were also completed (quality: alpha = .84; quantity: alpha = .76; relevance: alpha = .78; manner: alpha = .85). All items were taken from McCornack et al. (1992), although that study used four items. The results of a confirmatory factor analysis were consistent with the validity of the measures. The scales were second-order unidimensional, replicating McCornack et al.'s (1996) re-analyses of McCornack et al. (1992) and Jacobs et al. (1996). The questionnaire was completed in English. As English was the medium of instruction in their college, the respondents were thought to have a sufficient level of proficiency in English. Nevertheless, a glossary of key terms was provided and additional verbal explanations were given in order to ensure that the participants had a complete understanding of the questionnaire. They were also given a chance to ask for further clarification of any points they did not understand.

RESULTS

The manipulation checks were successful. The message violating quantity (M = 3.77) was rated as significantly less disclosive (t(125) = 2.92, p < .004, r = .25) than the baseline message (M = 4.50).
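(Here and below, the effect size r reported with each t test is consistent with the standard conversion from t and its degrees of freedom, treating the subscript on each t statistic as its degrees of freedom:

    r = \sqrt{ \frac{t^2}{t^2 + df} }

For the quantity check just reported, \sqrt{ 2.92^2 / (2.92^2 + 125) } is approximately .25, matching the printed value.)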
The message violating quality (M = 3.39) was rated as less accurate (t(123) = 7.47, p < .001, r = .56) than the baseline message (M = 4.90). Scores in the relevance violation condition (M = 3.50) were rated significantly lower (t(121) = 5.94, p < .001, r = .48) than in the baseline condition (M = 4.80). Finally, violations of manner (M = 3.90) were rated as less clear (t(122) = 3.42, p < .001, r = .30) than the baseline message (M = 4.85). The violation manipulation produced a statistically significant and large main effect (F = 22.54, p < .0001, eta-squared = .23). Scheffe tests showed that violations of quality and relevance were rated as significantly more deceptive than the baseline message, while violations of quantity and manner were not rated as any more deceptive than the baseline message. No differences were found between the baseline message, the quantity violation, and the manner violation. Similarly, quality violations did not differ from violations of relevance. Statistical power was .70 for r > .20 and .95 for r > .30. Means are presented in Table 1.

DISCUSSION

This study tested McCornack's (1992) Information Manipulation Theory (IMT) in Hong Kong. Only violations of quality (falsification) and relevance (evasion) were rated as more deceptive than the baseline message in Hong Kong. Messages violating quantity (omission) and manner (equivocation) did not differ from the completely honest message in deception ratings. Simply put, false and evasive messages were rated as deceptive but omissions and equivocation were not. These results differ dramatically from those obtained in the United States. Comparing the current results to the findings of previous studies conducted in the U.S. (Michigan: McCornack et al., 1992; Arizona: Jacobs et al., 1996; Hawaii: Lapinski, 1995), the Hong Kong students appear to rate each of the message types differently than their U.S. American counterparts (see Table 1). When statistically compared to McCornack et al.'s (1992) original results, the means in each condition differed significantly. The current respondents rated violations of quality (t(267) = 10.67, p < .01, r = .55), quantity (t(279) = 3.26, p < .01, r = .19), manner (t(274) = 5.51, p < .01, r = .32), and relevance (t(271) = 3.06, p < .01, r = .18) as less deceptive than did McCornack et al.'s (1992) subjects. Conversely, the honest baseline message was rated as more deceptive by the Hong Kong sample than by the U.S. sample (t(278) = -3.33, p < .01, r = .20). Simply put, what is seen as truthful and deceptive appears to vary substantially across cultures. As mentioned previously, there are at least two general explanations for the results. First, the fundamental expectations that guide conversational understanding (i.e., Grice's maxims) in Western cultures may not generalize to Hong Kong. This explanation holds that violations of quantity and manner are not seen as deceptive in Hong Kong because the Chinese do not expect others to adhere to the maxims of clarity and disclosure. If the maxims do not apply, one should not be expected to follow them.

[TABLE 1 omitted]

An alternative explanation is that each of the maxims holds in Hong Kong, but what counts as a violation differs between Hong Kong and the United States. That is, the differences lie not in the fundamental assumptions that guide conversations but in what is required to fulfill each maxim. To explore these two explanations, message honesty ratings were correlated with ratings of quality, quantity, relevance, and manner.
As the correlations in Table 2 show, messages that were rated as violations also tended to be rated as dishonest (r = .43 to .67). So, for example, although the message involving omission was not rated as any more deceptive than the baseline message, the more a message was perceived as omitting information, the less honest it was rated. This suggests that violations of the four maxims may be seen as deceptive in Hong Kong, but that what counts as a violation differs from the U.S. To the extent that this second explanation is valid, IMT may have some validity in non-Western cultures, or at least in Hong Kong. While differences clearly exist in the ratings of specific messages, perceived violations are associated with perceived deception. In other words, Hong Kong Chinese may not have the same strict standards as U.S. Americans do for absolute clarity and full information disclosure in the messages they receive. It should be noted, however, that the correlations between violation ratings and honesty ratings were generally lower than those obtained in the U.S. The same correlations in McCornack et al. (1992) were generally larger (r = .57 to .79). When tested for significant differences with r-to-z transformations, the correlations for quantity (r = .67 vs. .43, z = 5.42), quality (r = .79 vs. .67, z = 4.00), and manner (r = .62 vs. .52, z = 2.27) were significantly larger in the McCornack et al. (1992) data than in the current data. No difference was evident in the relevance correlations (r = .57 and .54, z = 0.69). The dishonesty ratings of the evasive message were rather surprising and inconsistent with our predictions. At least two explanations are plausible. First, the relevance findings might stem from the particular situation used in the questionnaire. In an intimate relationship, the role expectation might require a direct address of the critical issue when a question is raised about one's fidelity and commitment to a relationship. An irrelevant answer tends to be interpreted as having something to hide, thus leading to an interpretation of dishonesty. In other words, the perceptions of deception by Hong Kong Chinese regarding the irrelevant message might not generalize to other situations.

[TABLE 2 omitted]

Second, the finding might not be an artifact, and Hong Kong Chinese might have less tolerance for relevance violations than for quantity or manner violations. Two findings seem consistent with this account. Previous IMT studies (cf. Lapinski, 1995; McCornack et al., 1992) have found little situational variation in deceptive message ratings. Also, in the current study, the manipulation check indicated stronger manipulations of relevance and quality than of quantity or manner. Together, these findings might indicate that Hong Kong Chinese are more sensitive to violations of relevance and quality than they are to violations of quantity or manner, and that these differences are not a mere function of situational idiosyncrasies. A further study incorporating different situations would be necessary to test this reasoning. In general, the results of the present study indicate that there are different cultural expectations regarding which violations of conversational maxims lead to perceptions of deceptive messages. Such differences in expectations may lead to cross-cultural misunderstandings. Apparently, Hong Kong Chinese have a higher threshold of tolerance for violations of conversational maxims as compared with U.S. Americans.
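The cross-study comparisons above follow Fisher's standard r-to-z procedure for comparing two independent correlations. A minimal sketch in Python; the 1992 study's sample size is not reported in this article, so n1 below is an illustrative placeholder:

    import math

    def fisher_z_test(r1, n1, r2, n2):
        # Fisher's z = atanh(r); the difference of two independent z values is
        # approximately normal with SE = sqrt(1/(n1 - 3) + 1/(n2 - 3)).
        z1, z2 = math.atanh(r1), math.atanh(r2)
        se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
        return (z1 - z2) / se

    # Quantity dimension: r = .67 (McCornack et al., 1992) vs. r = .43 (N = 310 here).
    # n1 = 1000 is a placeholder for the unreported 1992 sample size.
    print(fisher_z_test(0.67, 1000, 0.43, 310))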
Hong Kong Chinese should not be seen as more deceptive than U.S. Americans. Instead, they may use message manipulation strategies to avoid hurting the other's feelings or to fulfill social obligations and expectations. Such violations may not be intended to be covert. For example, they often leave certain things unsaid, expecting others to read between the lines. To U.S. Americans, such violations of the conversational maxims would be seen as covert and thus would constitute an act of dishonesty. Under such circumstances, U.S. Americans would take the partial or ambiguous messages coming from the Chinese as intentionally deceptive. The Chinese, on the other hand, may be upset and embarrassed by the U.S. American style of directness, as the U.S. Americans do not seem to give due consideration to face. This may explain why the Hong Kong Chinese rated the baseline message as less honest than did those in the U.S. While this study was designed from the perspective of IMT, other research might legitimately study deception from a more culture-specific framework. For example, the current study might have been grounded in the culture of Hong Kong Chinese. Such an approach would likely yield additional important insights, and would be better suited to the study of the nuances of deceptive discourse in Hong Kong. Further research on cultural differences in deception is of course needed. This study only examined deception in one situation, with one set of message examples, in just one culture, and from the perspective of IMT. English-speaking Hong Kong Chinese may be more Westernized than some other Asian populations, and a dating situation might have reduced applicability. Nevertheless, this study provides some preliminary insights into cultural differences in the perceptions of message honesty.

REFERENCES

Aune, R. K., & Waters, L. L. (1994). Cultural differences in deception: Motivations to deceive in Samoans and North Americans. International Journal of Intercultural Relations, 18, 159-172.
Bavelas, J. B., Black, A., Chovil, N., & Mullett, J. (1990). Equivocal communication. Newbury Park, CA: Sage.
Bond, M. H., & Kwang-kuo, H. (1986). The social psychology of Chinese people. In M. Bond (Ed.), The psychology of the Chinese people. Hong Kong: Oxford University Press.
Bond, M. H., & Lee, P. W. H. (1981). Face saving in Chinese culture: A discussion and experimental study of Hong Kong students. In M. Bond (Ed.), Social life and development in Hong Kong. Hong Kong: The Chinese University Press.
Bowers, J. W., Elliott, N. D., & Desmond, R. J. (1977). Exploiting pragmatic rules: Devious messages. Human Communication Research, 3, 235-242.
Burgoon, J. K., Buller, D. B., Guerrero, L. K., Afifi, W. A., & Feldman, C. M. (1996). Interpersonal deception XII: Information management dimensions underlying deceptive and truthful messages. Communication Monographs, 63, 50-69.
Buss, D. M., & Schmitt, D. P. (1993). Sexual strategies theory: An evolutionary perspective on human mating. Psychological Review, 100, 204-232.
Chang, H-C., & Holt, R. (1994). A Chinese perspective on face as inter-relational concern. In S. Ting-Toomey (Ed.), The challenge of facework: Cross-cultural and interpersonal issues. Albany, NY: State University of New York Press.
Chiu, C-Y. (1990). Role expectation as the principal criterion in justice judgment among Hong Kong Chinese students. The Journal of Psychology, 125(5), 557-565.
Chiu, C-Y. (1991). Hierarchical social relations and justice judgment among Hong Kong Chinese college students. The Journal of Social Psychology, 131(6), 885-887.
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. W., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70, 979-995.
Du, J. S. (1995). Performance of face-threatening acts in Chinese complaining, giving bad news, and disagreeing. In G. Kasper (Ed.), Pragmatics of Chinese as native and target language. Honolulu: University of Hawaii Press.
Ekman, P. (1985). Telling lies. New York: Berkley Books.
Galasinski, D. (1994). Deception: Linguist's perspective. Paper presented at the International Communication Association convention, Sydney, Australia.
Grice, P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
Gu, Y. (1990). Politeness phenomena in modern Chinese. Journal of Pragmatics, 14, 237-257.
Hofstede, G. (1980). Culture's consequences: International differences in work related values. Newbury Park, CA: Sage.
Hopper, R., & Bell, R. A. (1984). Broadening the deception construct. Quarterly Journal of Speech, 70, 288-302.
Hu, H. C. (1944). The Chinese concept of 'face'. American Anthropologist, 46, 45-65.
Jacobs, S., Dawson, E. J., & Brashers, D. (1996). Information manipulation theory: A replication and assessment. Communication Monographs, 63, 70-82.
Kasper, G., & Zhang, Y. (1995). It's good to be a bit Chinese: Foreign students' experience of Chinese pragmatics. In G. Kasper (Ed.), Pragmatics of Chinese as native and target language. Honolulu: University of Hawaii Press.
Lapinski, M. K. (1995). Deception and the self: A cultural examination of Information Manipulation Theory. Unpublished master's thesis, University of Hawaii, Honolulu.
McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59, 1-16.
McCornack, S. A., & Levine, T. R. (1990). When lies are uncovered: Emotional and relational outcomes of discovered deception. Communication Monographs, 57, 119-138.
McCornack, S. A., Levine, T. R., Morrison, K., & Lapinski, M. (1996). Speaking of information manipulation: A critical rejoinder. Communication Monographs, 63, 89-91.
McCornack, S. A., Levine, T. R., Solowczuk, K. A., Torres, H. I., & Campbell, D. M. (1992). When the alteration of information is viewed as deception: An empirical test of information manipulation theory. Communication Monographs, 59, 17-29.
Metts, S. (1989). An exploratory investigation of deception in close relationships. Journal of Social and Personal Relationships, 6, 159-179.
Nishiyama, K. (1993). Japanese negotiators: Are they deceptive or misunderstood? Paper presented at the 23rd annual convention of the Communication Association of Japan.
O'Hair, D., Cody, M. J., Wang, X. T., & Chao, E. Y. (1990). Vocal stress and deception detection among Chinese. Communication Quarterly, 38, 158-169.
Saarni, C., & Lewis, M. (1993). Deceit and illusion in human affairs. In M. Lewis & C. Saarni (Eds.), Lying and deception in everyday life. New York: The Guilford Press.
Tang, S. F. Y., & Kirkbride, P. S. (1986). The development of conflict handling skills in Hong Kong: Some cross-cultural issues. Working Paper No. 7, City Polytechnic of Hong Kong.
Turner, R. E., Edgley, C., & Olmstead, G. (1975). Information control in conversations: Honesty is not always the best policy. Kansas Journal of Sociology, 11, 69-89.
Yang, K-S. (1986). Chinese personality and its change. In M. Bond (Ed.), The psychology of the Chinese people. Hong Kong: Oxford University Press.
Yum, J. O. (1988). The impact of Confucianism on interpersonal relationships and communication patterns in East Asia. Communication Monographs, 55, 374-388.

LORRITA N. T. YEUNG, TIMOTHY R. LEVINE, and KAZUO NISHIYAMA

Lorrita N. T. Yeung (Ph.D., Macquarie University [Australia], 1994) is an associate professor and Director of the Language Centre at Lingnan College, Hong Kong. Timothy R. Levine (Ph.D., Michigan State University, 1992) is an associate professor and Kazuo Nishiyama (Ph.D., University of Minnesota, 1970) is a professor in the Department of Speech at the University of Hawaii at Manoa.

----------------------------------------------

International deception
Personality and Social Psychology Bulletin, Thousand Oaks, Mar 2000
Authors: Charles F. Bond Jr., Adnan Omar Atoum
Volume: 26; Issue: 3; Pagination: 385-395; ISSN: 01461672
Subject Terms: Lying; Cross cultural studies; Cognition & reasoning; Social psychology
Abstract: Three studies of international deception involving Americans, Jordanians and Indians were conducted. Ancillary results reveal that people from diverse backgrounds reach consensus in deception judgments and that motivation can impair a liar's ability to achieve communication goals.
Copyright Sage Publications, Inc. Mar 2000

Full Text:
This article reports three studies of international deception. Americans, Jordanians, and Indians were videotaped while lying and telling the truth, and the resulting tapes were judged for deception by other Americans, Jordanians, and Indians. Results show that lies can be detected across cultures. They can be detected across cultures that share a language and cultures that do not, by illiterates as well as university students. Contrary to a hypothesis of ethnocentrism, perceivers show no general tendency to judge persons from other countries as deceptive; in fact, they often judge foreigners to be more truthful than compatriots. There is, however, some evidence for a language-based ethnocentrism when perceivers are judging the deceptiveness of a series of people from the same multilingual culture. Ancillary results reveal that people from diverse backgrounds reach consensus in deception judgments and that motivation can impair a liar's ability to achieve communication goals.

Deception has been defined as an "act that is intended to foster in another person a belief or understanding which the deceiver considers false" (Zuckerman, DePaulo, & Rosenthal, 1981). Judgments of deception have important consequences. Some may even start wars (Triandis, 1994). Psychologists have studied judgments of deception from behavior. They have identified some nonverbal cues to deceit (Ekman, 1992) and discovered a number of patterns in naive observers' attempts to spot lies (Zuckerman et al., 1981). Unfortunately, most of the research on deception has been restricted to the United States. Although a little has been learned about deception in other countries (Aune & Waters, 1994; Cody, Lee, & Chao, 1989; Feldman, 1979), there has been only one experimental study of international deception to date. Bond, Omar, Mahmoud, and Bonser (1990) investigated international deceptions between Jordanians and Americans. Jordanian and American students were videotaped while telling lies and truths; later, other Jordanian and American students watched the tapes and tried to spot deceit.
Results showed statistically significant lie detection within each of the two cultures but no lie detection between cultures. In implying that lies cannot be detected across cultures, the Bond et al. (1990) results would seem to have important implications. Theoretically, these results suggest that the ability to detect deception reflects culture-specific learning. At a practical level, they suggest that in international settings, liars are rarely caught. It would be premature, however, to infer from a single study that cross-cultural lie detection is impossible. Even within a culture, lies are difficult to detect. In the American research literature, rates of monocultural lie detection rarely exceed 55%, when guessing would produce 50% detections (Kraut, 1980). Thus, one could hardly expect that lies would be easy to detect across cultures or that international lie detection abilities would be easy to observe. The Bond et al. (1990) study may have failed to uncover international lie detection for several reasons: They required people to judge deception solely from visible cues, restricted their research to students, and gave these people no motivation to lie. In principle, these factors might influence international deception judgments, as will now be explained. Often, international lies are encountered in face-to-face meetings, where the liar can be seen and heard. However, in the only existing study of international deception (Bond et al., 1990), people were forced to judge deception from a video presentation with no sound. This unnatural perceptual mode may have undermined judges' attempts at lie detection. Consistent with this analysis, previous research shows that Americans are more accurate at detecting Americans' lies if they can hear the liars in addition to seeing them (Zuckerman et al., 1981). International deceptions can involve individuals from diverse backgrounds. Participants in deception research are, by contrast, homogeneous. Almost invariably, they are students. Bond et al. (1990), for example, had university students in one country lie to university students in another country. Perhaps attempts at lie detection depend on the similarity of the liar to the target of deception. Perhaps research-based conclusions about deception judgments reflect university students' youth, wealth, and higher education (cf. Sears, 1986). In light of these possibilities, no general conclusions about international deception should be drawn from university students' attempts to detect university students' lies. The stakes in an international deception can be high (Ekman, 1992). In most research on deception, the stakes are by contrast low. In the Bond et al. (1990) study of international deception, for example, participants were motivated solely by their desire to fulfill a psychology course requirement. Perhaps people regard experimental deception tasks as a form of acting, devoid of any consequences. Perhaps they feel none of the arousal that highly motivated liars experience and, hence, display none of the cues that would otherwise give them away. Consistent with this line of reasoning, Americans are most likely to "leak" nonverbal cues to deception when highly motivated to conceal their lies (DePaulo & Kirkendol, 1989). The current article reports three studies of international deception.
In Experiment 1, American and Jordanian university students attempt to detect international lies from an audiovisual presentation; in Experiment 2, illiterate Indian farm workers attempt to detect American and Jordanian university students' lies; and in Experiment 3, judges from three countries seek to detect Indians' motivated lies. These studies seek to determine whether lies can be detected across cultures. Judgments of deception need not be accurate to have important effects. Indeed, Triandis (1994) maintains that the Persian Gulf War resulted when an Iraqi official mistakenly concluded that an American negotiator was lying. Thus, the present experiments will consider not only the accuracy of international lie detection but also international biases in judging deceit. Previous research shows that Americans show a bias toward perceiving other Americans as truthful (DePaulo, Stone, & Lassiter, 1985). In considering the sorts of biases that might color international deception judgments, we considered two possibilities. Perhaps people give the benefit of the doubt to communicators they do not understand. If so, people might be reluctant to judge foreigners as deceptive. Perhaps, on the other hand, people are suspicious of outsiders. If so, ethnocentric stereotypes (Smith & Bond, 1994) might encourage them to judge foreigners as dishonest. In three experiments, we will assess these possibilities by examining American, Jordanian, and Indian tendencies to attribute deception to foreigners and compatriots.

EXPERIMENT 1: VISIBLE AND AUDIBLE LIES

In principle, international subterfuge might be uncovered from a variety of cues. Eye contact, smiles, and head nodding can be seen; speech rate, volume, and tone of voice can be heard. Although researchers have detailed the impact of many such cues on Americans' judgments of Americans' veracity (cf. Zuckerman et al., 1981), in the international arena, some complications emerge. Negotiators who are ignorant of an adversary's language may attach special significance to nonverbal cues. Yet, these negotiators must be cognizant of cross-cultural differences, lest they interpret foreign mannerisms as evidence of deception (cf. Bond et al., 1992). An initial study was conducted to analyze the impact of visible and audible cues on international lie detection. American and Jordanian university students attempted to detect one another's lies from one of three presentations: a video presentation of the liar's face and body, an audio presentation of the liar's speech, or an audiovisual presentation of both visible and audible cues. Americans have difficulty detecting other Americans' lies when they must base their judgments solely on what they can see. They are better at detecting lies if the liar also can be heard (DePaulo et al., 1985). Experiment 1 will determine whether similar effects obtain in American and Jordanian university students' attempts at international lie detection. Ordinarily, liars attempt to conceal their deceptions, hoping to gain an advantage over an adversary (Bond, Kahler, & Paolicelli, 1985). In this respect, liars differ from other communicators, who seek to transmit information faithfully. In deference to the adversarial nature of deceptive interactions, most of the liars in the present research were instructed to conceal their deceit. To allow for a comparison with attempts at faithful information transmission, a few liars were given a different communication goal: to convey to others the fact that they were lying.
Earlier research indicates that Americans have some ability to convey deception to other Americans (Zuckerman, DeFrank, Hall, Larrance, & Rosenthal, 1979). We wondered whether cross-cultural attempts at conveying deception also might be efficacious.

METHOD

Research Participants

American and Jordanian university students participated in Experiment 1 by judging deception. The Americans were 89 female and 31 male psychology students at Texas Christian University. The Jordanians were 30 female and 30 male psychology students at Yarmouk University in Jordan.

Videotapes

The participants judged videotapes that had been made by Bond et al. (1990). These depicted Americans and Jordanians lying and telling the truth. On the videotapes were 20 male and 20 female American psychology students from Texas Christian University as well as 20 male and 20 female Jordanian psychology students from Yarmouk University. Throughout the experimental procedure, the Americans spoke in English; the Jordanians spoke in Arabic. At the time the tape was made, a student sat facing a male research assistant from the student's culture, and a videotape camera was located over the assistant's right shoulder. Students were then asked to describe a person they knew. They were asked to describe either (a) a person they liked, (b) a person they disliked, (c) a person they liked as if they really disliked that person, or (d) a person they disliked as if they really liked that person. Students were instructed either to tell the truth (if giving one of the first two descriptions above) or to lie (if giving one of the latter two descriptions). After giving the initial description, the student was asked for a second person description. Over the course of the videotaping session, each student gave all four of the person descriptions described above, with the order of the descriptions counterbalanced across students. For a similar procedure, see DePaulo and Rosenthal (1979). Students' communication goal was manipulated. On videotape, each student told the truth and lies. While telling the truth, all of the students were instructed to convince the research assistant that they were telling the truth. While lying, most of the students (32 Americans and 32 Jordanians) were instructed to conceal their lies and convince the research assistant that their descriptions were truthful. The other students (8 Americans and 8 Jordanians) were instructed to convey their lies and let the research assistant know that their descriptions were false. The latter were instructed not to say, in so many words, "I am lying" but were free to expose their deceit in any other way. The videotapes created a 2 (conceal vs. convey lie) x 2 (tell lie vs. tell truth) factorial experiment. Although these videotapes had been made for an earlier cross-cultural study (Bond et al., 1990), none of the current judges had participated in the previous research. In all, 320 person descriptions were solicited (four from each of 80 students: 40 Americans and 40 Jordanians). These were edited onto four videotapes. Each videotape depicted one description from each of the 80 students. Forty of the descriptions were lies and 40 were truths. Of the 40 lies on each videotape, 32 were lies that the students had tried to conceal and 8 were lies that the students had tried to convey. To reduce the length of participants' judgment task, the videotape depicted only the first 30 seconds of each person description. The composition of the tapes is sketched below.
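For concreteness, the tape composition can be reconstructed from the reported counts. The Python sketch below uses a simple rotation to assign descriptions to tapes; the authors' actual assignment scheme is not stated in the article, and the conceal/convey split of the lies is omitted for brevity:

    DESCRIPTIONS = [
        ("liked person, described truthfully", "truth"),
        ("disliked person, described truthfully", "truth"),
        ("liked person, described as disliked", "lie"),
        ("disliked person, described as liked", "lie"),
    ]

    # Rotate each speaker's four descriptions across the four tapes so that
    # every tape receives exactly one description per speaker and, because the
    # 80 speakers are spread evenly over the rotation, 40 lies and 40 truths.
    tapes = {t: [] for t in range(4)}
    for speaker in range(80):
        for offset, (description, veracity) in enumerate(DESCRIPTIONS):
            tapes[(speaker + offset) % 4].append((speaker, description, veracity))

    for t, segments in tapes.items():
        lies = sum(1 for _, _, v in segments if v == "lie")
        print(f"tape {t}: {len(segments)} segments, {lies} lies, {len(segments) - lies} truths")

Running the sketch prints 80 segments, 40 lies, and 40 truths for each tape, matching the counts reported above.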
Procedure

Visually isolated from one another in groups of five, research participants were presented with a tape of people describing acquaintances. As each description was presented, the participant tried to determine whether it was the truth or a lie. Immediately after the description, participants indicated their binary lie-or-truth judgment on a written form. In response to one of the four videotapes described above, participants judged the veracity of 80 person descriptions. The tapes were presented in one of three modalities. One third of the participants judged deception from an audiovisual presentation of the tape, one third from an audio-only presentation, and one third from a video-only presentation. Each tape was judged in each modality by 10 Americans and 5 Jordanians. The segments on each tape were presented in one of two random orders.

RESULTS

Judges were asked about their language abilities. None of the 120 American judges claimed to know Arabic. Of the 60 Jordanian judges, 59 claimed to know English.

Lie Detection Within and Between Cultures

In the current study, each participant judged 40 lies and 40 truths. Half of the lies and half of the truths had been told by a member of the judge's culture; the other half had been told by a member of another culture. To test for lie detection, we noted the percentage of correct lie/truth judgments made by each of the 180 judges and compared the mean percentage correct to the 50% that would be expected by chance (a one-sample t test; a sketch of this test appears below). We wondered whether it is possible to detect lies across cultures. It is. Overall, our participants' detection accuracy was 51.25% across cultures, t(174) = 2.06, p < .05, and 54.27% within cultures, t(174) = 7.03, p < .0001. An analysis of variance was conducted to identify factors that influence lie detection. This was a 3 (modality: audiovisual, audio only, or video only) x 2 (judge's culture: American vs. Jordanian) x 2 (judgment status: target from same culture or other culture) x 2 (liar's goal: conceal or convey lie) mixed-model ANOVA on the percentage of correct lie/truth judgments. Results revealed that judgments were more accurate within cultures than across cultures, F(1, 174) = 14.35, p < .001; that lies that targets had tried to convey could be more readily discriminated from truths than those that targets had tried to conceal, F(1, 174) = 24.43, p < .0001; and that lie detection was less accurate when attempted from a video-only rather than an audiovisual or an audio-only presentation; for the main effect of modality, F(2, 174) = 16.97, p < .001. The ANOVA also showed that the liar's goal had its biggest effect on detection accuracy when lies were judged in the audiovisual rather than the audio or video mode, Modality x Goal interaction, F(2, 174) = 6.76, p < .005; that the liar's goal had a bigger effect on the accuracy of judges from the target's own culture than on judges from the other culture, F(1, 174) = 4.69, p < .05; and that as targets, Americans were more successful than Jordanians at conveying deception. This final effect produced a two-way Judge's Culture x Judgment Status interaction, F(1, 174) = 7.67, p < .01, and a three-way Judge's Culture x Judgment Status x Target's Goal interaction, F(1, 174) = 9.30, p < .01. No other effects in the ANOVA were statistically significant. Relevant means and t tests appear in Table 1. Across cultures, truths could be discriminated from lies that a target had attempted to convey when judgments were made from an audiovisual presentation.
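The test against chance referenced above (and used again in Experiments 2 and 3) is a one-sample t test on judges' percentage-correct scores. A minimal sketch in Python; the scores generated here are simulated placeholders, not the study's data:

    import math
    import random

    def t_vs_chance(scores, chance=50.0):
        # One-sample t test: t = (mean - chance) / (s / sqrt(n)), df = n - 1.
        n = len(scores)
        mean = sum(scores) / n
        var = sum((x - mean) ** 2 for x in scores) / (n - 1)
        return (mean - chance) / math.sqrt(var / n), n - 1

    rng = random.Random(1)
    simulated = [50 + rng.gauss(1.25, 8) for _ in range(175)]  # placeholder accuracy scores
    t, df = t_vs_chance(simulated)
    print(f"t({df}) = {t:.2f}")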
Within cultures, truths could usually be discriminated from lies, except when judgments were made from video only. Experiment 1 provides evidence for lie detection across a language as well as across cultures. American judges reported that they could not understand Arabic. However, from an audiovisual presentation, these judges could discriminate Jordanians' Arabic-language lies from truths (M correct = 53.30%), t(39) = 2.97, p < .01. Indeed, Americans' judgments of Jordanians were significantly more accurate when made from the audiovisual than from the video-only presentation (M correct for the latter = 49.31%); for the difference, F(1, 174) = 4.66, p < .05. Access to Arabic speech facilitated lie detection by students who did not know Arabic.

Judgmental Biases

In principle, ethnocentrism might encourage people to stereotype foreigners as dishonest. Experiment 1 does not support this hypothesis. Americans gave foreigners the benefit of the doubt, judging more Jordanians than Americans to be truthful (Ms = 59.55% vs. 52.07%, respectively), F(1, 174) = 36.13, p < .0001. Jordanians judged as truthful just as many Americans as Jordanians; for the difference, F(1, 174) = .36, ns. In general, foreigners received the benefit of the doubt only if they could be heard. Research participants judged as truthful 58.06% of foreigners and 53.24% of compatriots who had been depicted in the audiovisual presentation, 61.43% of foreigners and 49.14% of compatriots in the audio-only presentation, and 51.20% of foreigners and 52.29% of compatriots in the video-only presentation. In an ANOVA on percentage truth judgments, these patterns produced an interaction between judge's culture (American vs. Jordanian) and the status of the judgment (to own vs. other culture), F(1, 174) = 8.93, p < .005, as well as an interaction between presentation modality and the judgment's status, F(2, 174) = 14.60, p < .001.

DISCUSSION

Experiment 1 provides the first evidence to date of lie detection across cultures. Although it is not easy to detect lies across cultures, neither is cross-cultural lie detection impossible. International lie detection seems to require an audiovisual exposure to the liar, one that had not been available in earlier research. Experiment 1 suggests that Americans judge foreigners to be more truthful than fellow Americans. Hearing someone speak in an unfamiliar language may encourage judges to acknowledge their ignorance of the speaker's culture. Then judges give a speaker the benefit of the doubt. The Jordanians, who knew English, showed no such tendency in judging American speakers. Unfortunately, Experiment 1 has limitations. All of the participants in the experiment were students in higher education, and they were judging other students in higher education. Worldwide, university mores may encourage tolerance of cultural differences. If so, students may be uniquely inclined to give foreigners the benefit of the doubt. Worldwide, university students are similar in age, economic status, and educational background. Although Experiment 1 provides evidence of lie detection within an elite, geographically dispersed "culture" of higher education, it need not imply that individuals from radically different backgrounds could detect one another's lies.

EXPERIMENT 2: THE "CULTURE" OF HIGHER EDUCATION

We designed a second study to provide a more stringent test for international lie detection. In Experiment 2, videotapes of American and Jordanian university students were judged by Indians.
Some of these Indian judges were university students, whereas others were illiterate farm workers. If international lie detection is confined to an elite culture of higher education, illiterate Indian farm workers should show no ability to detect American and Jordanian university students' lies. If only the highly educated give foreigners the benefit of the doubt, illiterates should be more willing than university students to judge foreigners as deceptive.

[TABLE 1 omitted]

While investigating international lie detection, Experiment 2 also provides an assessment of the nonverbal abilities of illiterates. Illiterates comprise roughly a third of the world's adult population (Tresserras, Canela, Alvarez, Sentis, & Salleras, 1992) but are rarely studied by social psychologists (Jahoda, 1979). Illiterates are of theoretical interest because they can illuminate the impact of schooling on various competencies. In principle, one might expect illiterates to have exceptional judgmental abilities. Perhaps illiterates are uniquely equipped to interpret nonverbal behavior, possessing a holistic style of reasoning that education would undermine (Rogoff, 1980). Moreover, illiterates have a special investment in face-to-face behavior because they rely exclusively on nonwritten communications. Experiment 2 will test these possibilities.

METHOD

Research Participants

The participants were 120 residents of Maharashtra, a state in western India. Sixty of the participants (29 females, 31 males) were English-speaking psychology students at the University of Pune in India. The other 60 participants (15 females, 45 males) were farm workers from Bakori village, an isolated agricultural community 30 miles from Pune. The villagers spoke the Indian language Marathi. As farm laborers, these villagers earned 75 cents a day. Although the villagers had seen Indian television, few had ever met a non-Indian.

Procedure

Visually isolated in groups of five, the participants were presented with a videotape of Americans and Jordanians describing acquaintances. As each videotape segment was presented, participants indicated whether the person on the tape was lying or telling the truth. University students, who were seated in a classroom, indicated their binary lie-or-truth judgments in writing on a response form. Farm workers, who were seated on the floor of the village Hindu temple, indicated their judgments nonverbally by turning a thumb up if they thought that the person on the videotape was telling the truth or a thumb down if they thought that the person was lying. In response to one of the four videotapes described in Experiment 1 above, each participant judged the veracity of 80 students (40 Americans and 40 Jordanians). As before, the tapes were presented in one of three modalities: audiovisual, audio only, or video only. The segments on each tape were presented in one of two random orders. Each videotape was judged in each modality by 5 Indian students and 5 Indian farm workers.

RESULTS

All 60 of the Indian students had completed bachelor's degrees and were enrolled in a master's program. Each of these students knew English. None of the 60 farm workers had any higher education, and 26 acknowledged that they were illiterate. Although the other 34 farm workers claimed to be literate, many had difficulty signing their names. Nine of the farm workers claimed to know English, but none could understand the American English spoken by the first author of the current article.
None of these 120 Indian judges claimed to know Arabic.

Lie Detection by Students and Illiterates

Experiment 1 indicates that it is possible for Americans and Jordanians to detect lies across cultures. As a second test for international lie detection, we had each of 120 Indians judge 40 lies and 40 truths told by non-Indians. Again, there was evidence of cross-cultural lie detection. The Indian participants averaged 51.08% correct lie/truth judgments, which is more than the 50% expected by chance, t(114) = 2.27, p < .025. Perhaps lies can be detected internationally only by judges who are similar to the liars. Experiment 2 does not support this hypothesis. In fact, illiterate Indian farm workers were just as successful as Indian university students at detecting American and Jordanian university students' lies. In a 3 (presentation modality) x 2 (liar's goal: conceal vs. convey lie) x 2 (judge's subculture: student vs. farm worker) ANOVA on percentage correct lie/truth judgments, the judges' subculture had no significant effects; for the main effect, F(1, 114) = .01. In Experiment 1, lie detection depended on the modality in which lies were presented, with international lie detection evident only from an audiovisual presentation. Those results were replicated here. These Indians could detect lies across cultures only if they were judging an audiovisual presentation (M correct lie/truth judgments = 51.90%), t(39) = 2.63, p < .025. They could not detect non-Indian lies from either audio only or video only (Ms = 50.63% and 50.71% correct lie/truth judgments, respectively), t(39) = .73 and .83, both ns. In the analysis of variance, these differences produced a main effect of modality, F(2, 114) = 3.53, p < .05, but no interactions involving modality. Experiment 1 suggested that people can convey their lies across cultures but that they need not fear cross-culture exposure of lies that they wish to conceal. Experiment 2 yields different results. Here, detection accuracy was just as strong for concealed lies as for conveyed lies; for the relevant main effect in the ANOVA on percentage correct judgments, F(1, 114) = 1.84, ns. Indeed, Indian judges could discriminate from truths the lies that non-Indians intended to conceal (M correct lie/truth judgments = 51.35%), t(114) = 2.49, p < .025, but not the lies that non-Indians intended to convey (M = 50.01%), t(114) = .01, ns.

Judgmental Biases of Students and Illiterates

In judging deception, American students give Jordanians the benefit of the doubt, as Experiment 1 shows. We had imagined that Indian university students would show this same judgmental bias and wondered whether illiterates might show an opposite bias, toward ethnocentric suspicion of outsiders. In fact, Indian university students showed some tendency to perceive Americans and Jordanians as truthful, but illiterates gave these foreigners a stronger benefit of the doubt. Although 53.21% of the non-Indians were judged as truthful by Indian university students, 62.72% were judged as truthful by Indian illiterates; for the difference, F(1, 114) = 35.58, p < .0001. These biases were unaffected by the target's culture and by the modality in which the foreigners were perceived; each F yields p > .10.

DISCUSSION

Individuals from starkly different backgrounds are able to detect one another's lies. International lie detection is not confined to an elite culture of higher education or to falsehoods that people wish to convey.
The findings of Experiment 2 suggest that illiterates are neither more nor less successful at international lie detection than university students but that illiterates are more inclined to give foreigners the benefit of the doubt. Although we might be tempted to draw generalizations from these first two experiments, some limitations of the research remain to be addressed. As a test of the impact of communication goals on international lie detection, the first two experiments have limitations. In these studies, judges encountered only one fourth as many conveyed as concealed lies, and targets had no special incentive. Human abilities to convey lies would be more fairly assessed in a balanced research design, and the research would be more relevant to high-stakes deceptions if participants were given more motivation to lie. As a study of illiterates' deception judgments, Experiment 2 has limitations. This experiment suggests that illiterates are more willing than students to give foreigners the benefit of the doubt, but it does not indicate the scope of this bias. Relative to students, illiterates also might be more trusting of compatriots. Although Experiment 2 indicates that illiterates are no better than students at detecting foreigners' lies, it need not imply that the two sets of judges have equal judgmental abilities. If in their daily interactions with compatriots illiterates had acquired special skills, these need not have helped them judge foreigners. Together, the first two experiments suggest that language differences have little impact on lie detection. Thus, Americans (who do not understand Arabic) can detect Arabic-language lies, and Indian farm workers (who understand only Marathi) can detect lies told in Arabic and English. Yet, these experiments confound language with culture. A clearer study of language differences could be conducted within a multilingual country. There, one could compare a perceiver's judgments of two sorts of compatriots: those who are lying in a language that the perceiver understands and those who are lying in another language.

EXPERIMENT 3: LANGUAGE, SUBCULTURE, AND MOTIVATION
To complement the first two experiments, we designed a third study. Having investigated judgments of American and Jordanian liars, we videotaped Indians telling lies and truths. Some of the Indians on the videotape spoke in English; others spoke in the Indian language Marathi. Half of the Indians on the videotape attempted to conceal their deceptions and half attempted to convey that they were lying. Videotapes of these Indian lies and truths were then judged for veracity by American students, Jordanian students, Indian students who understood English, and Indian illiterates who did not understand English. This final study sought to clarify the impact of communication goals, schooling, and language differences on international deception. Although most of the lies told in everyday life may be relatively inconsequential (DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996), psychologists are fascinated by deceptions that involve high stakes (Frank & Ekman, 1997). To accommodate psychological interest in motivated deceptions, we gave some of the participants of Experiment 3 financial incentives to lie. Motivation impairs Americans' ability to lie. DePaulo and Kirkendol (1989) report that Americans' lies can be more readily discriminated from truths if the Americans are highly motivated to conceal a lie than if they are relatively unmotivated.
Experiment 3 will determine whether, in attempts to conceal deception, Indians also suffer a motivational impairment effect. The experiment will extend earlier efforts by studying the effect of motivation on Indians' attempts to convey that they are lying.

METHOD

Research Participants
The participants were 120 American students from Texas Christian University (60 female, 60 male), 60 Jordanian students from Yarmouk University (30 female, 30 male), and 120 Indians (30 female and 30 male students from the University of Pune; 15 female and 45 male farm workers from Bakori village).

Videotapes
The participants judged videotapes of Indians lying and telling the truth. These Indian videotapes were modeled on the tapes of Americans and Jordanians described in Experiment 1 above (cf. DePaulo & Rosenthal, 1979). The videotapes depicted 32 female and 32 male residents of Pune, India, who had responded to a newspaper advertisement. While being videotaped, half of the Indians spoke in English and half spoke in the Indian language Marathi. At the time that the tape was made, an Indian sat facing an Indian male research assistant and a videotape camera located over the assistant's right shoulder. Indians were then videotaped while describing (a) a person they liked, (b) a person they disliked, (c) a person they liked as if they really disliked that person, and (d) a person they disliked as if they really liked that person. The order of the descriptions was counterbalanced across participants. While telling the truth, all of the participants were instructed to convince the research assistant that they were telling the truth. While lying, 16 of the English-speaking Indians and 16 of the Marathi-speaking Indians were instructed to conceal their lies and convince the research assistant that their descriptions were truthful. The others were instructed to convey their lies and let the research assistant know that their descriptions were false without saying, in so many words, "I am lying." The Indians' motivation to lie was experimentally manipulated. Half of the participants were given no incentive to achieve their communication goal; the other half received a financial incentive. The latter could make 20 Indian rupees (Rs. 20) for each of their four person descriptions. They made Rs. 20 each time that they convinced the research assistant that a truthful person description was, in fact, truthful. They also made Rs. 20 for each lie if they were successful in achieving their assigned goal: either to conceal or to convey the lie. After observing each person description for which a financial incentive had been offered, the male research assistant (who was unaware of the order of the descriptions) guessed aloud whether the participant was lying or telling the truth. The Rs. 20 payment was then handed to participants who had succeeded in achieving their goal (of conveying the truth, conveying a lie, or concealing a lie). By giving four successful person descriptions, a participant could earn up to Rs. 80 in 30 minutes. At the time of the study, Rs. 80 was approximately 1 day's pay for an Indian university professor. In all, 256 Indian person descriptions were solicited. These were edited onto four videotapes. Each videotape depicted 64 descriptions, one from each of the 64 participants: four descriptions in each of the cells of a 2 (tell lie vs. tell truth) x 2 (conceal lie vs. convey lie) x 2 (English vs. Marathi language) x 2 (incentive vs. no incentive) factorial design.
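To make the layout of these tapes concrete, the sketch below enumerates the sixteen cells of this 2 x 2 x 2 x 2 design and confirms that four descriptions per cell yield the 64 segments on each tape. The factor labels are ours, chosen for illustration:

from itertools import product

# The four two-level factors of the videotape design described above.
factors = {
    "veracity": ["lie", "truth"],
    "goal": ["conceal", "convey"],
    "language": ["English", "Marathi"],
    "incentive": ["incentive", "no incentive"],
}

cells = list(product(*factors.values()))
descriptions_per_cell = 4
print(len(cells))                           # 16 cells
print(len(cells) * descriptions_per_cell)   # 64 segments per videotape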
Videotapes depicted the first 45 seconds of each person description.

Procedure
Visually isolated in groups of five, the participants were presented with a videotape of 64 Indians describing acquaintances. As each videotape segment was presented, participants indicated whether the person on the tape was lying or telling the truth. All of the university students indicated their binary lie-or-truth judgments in writing on response forms. The Indian farm workers turned a thumb up if they thought that the person on the tape was telling the truth and a thumb down if they thought that the person was lying. In response to one of the four Indian videotapes, each participant judged the veracity of 64 Indians. The tapes were presented in one of three modalities: audiovisual, audio only, or video only. The segments on each tape were presented in one of two random orders. Each videotape was judged in each modality by 10 Americans, 5 Jordanians, and 10 Indians (5 university students and 5 farm workers).

RESULTS AND DISCUSSION
After judging videotapes of Indians speaking in Marathi and English, participants reported on their language abilities. All 120 Indian judges, but none of the American or Jordanian judges, reported that they knew Marathi. Knowledge of English was claimed by all of the American judges, all of the Jordanian judges, all 60 of the Indian university students, and 4 of the 60 Indian farm workers. Of the 60 farm workers, 26 stated that they were illiterate. Although the other 34 claimed to be literate in Marathi, many had difficulty signing their names.

Lie Detection Within and Between Cultures
Our first two experiments indicate that American and Jordanian lies can be detected across cultures. We wondered whether there might also be evidence for the cross-cultural detection of Indian lies. There was. Each judge in Experiment 3 was presented with a videotape of 32 Indians lying and 32 Indians telling the truth. An analysis revealed that non-Indians who judged these tapes averaged 51.38% correct lie/truth judgments, which is greater than the 50% that would be expected by chance, t(179) = 2.81, p < .005. American and Jordanian lies could be detected more accurately by members of the liar's culture than by members of other cultures. This result did not generalize to Indian lies. In fact, in detecting Indians' lies, Indians were no more accurate than non-Indians; for the main effect of judge's culture (Indian vs. non-Indian) in an ANOVA on percentage correct lie/truth judgments, F(1, 294) = .0002, ns. Non-Indian judges averaged 51.38% correct detections of Indians; Indian judges averaged 51.39%. The earlier experiments indicate that lie detection abilities depend on the modality in which lies are judged. In particular, American and Jordanian lies can be detected across cultures only if they are judged from an audiovisual presentation, whereas these lies can be detected within cultures from either an audiovisual or an audio presentation. Here, modality had a main effect on judges' accuracy in detecting Indians' lies and truths, F(2, 294) = 6.14, p < .005, with statistically significant lie/truth discrimination evident from the audiovisual presentation (M correct = 53.14%), t(99) = 5.11, p < .0001, but not the audio-only or video-only presentations (Ms = 50.77% and 50.25% correct, each p > .20). This pattern was the same for Indian and non-Indian judges; for the Modality x Judge's Culture interaction, F(2, 294) = .40, ns.
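Factorial analyses of this kind can be reproduced with standard statistical software. The sketch below runs a between-subjects ANOVA on percentage-correct scores with modality and judge's culture as factors; the data frame holds hypothetical scores for illustration, with far fewer judges per cell than the actual study:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical percentage-correct scores, one row per judge.
df = pd.DataFrame({
    "accuracy": [53.1, 50.2, 51.9, 49.8, 52.4, 50.9,
                 54.0, 49.5, 51.1, 50.6, 52.8, 49.9],
    "modality": ["audiovisual", "audio", "video"] * 4,
    "culture": ["Indian"] * 6 + ["non-Indian"] * 6,
})

# Factorial ANOVA: main effects of modality and culture plus their interaction.
model = ols("accuracy ~ C(modality) * C(culture)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))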
The earlier studies found evidence for lie detection across languages as well as cultures. To clarify the role of language, in the current study we had Indians lie in two different languages: English and Marathi. Language had no significant effects on the percentage of correct lie/truth judgments in a 2 (English vs. Marathi language) x 2 (conceal vs. convey lie) x 2 (incentive vs. no incentive) x 2 (Indian vs. non-Indian judge) x 3 (modality) ANOVA (p > .05 for the main effect of language and every interaction involving language). However, when making judgments from audio only, Indian farm workers were less successful than Indian university students in discriminating Indian English-language lies from Indian English-language truths (M = 47.17% vs. 52.16% correct judgments, respectively), F(1, 114) = 4.48, p < .05. This difference may reflect the farm workers' ignorance of English and help explain why Indians as a whole did no better than non-Indians at detecting Indian lies.

The Liar's Goal and Motivation
Americans' attempts at concealing deception are subject to a motivational impairment effect (DePaulo & Kirkendol, 1989). We wondered whether motivation would impair Indians' attempts to conceal deception and whether motivation also might have some effect on Indians' attempts to convey that they are lying. An ANOVA on percentage correct lie/truth judgments reveals that the impact of motivation depends on the judge's culture as well as the liar's goal: for the Goal x Judge's Culture x Incentive interaction, F(1, 294) = 3.97, p < .05. Means relevant to this interaction appear in Table 2. Follow-up analyses revealed two motivational impairment effects. In particular, Indian judges could discriminate from truths those lies that Indians had received an incentive to conceal (M accuracy = 52.67%), t(119) = 2.27, p < .05, as well as lies that Indians had received no incentive to convey (52.70% accuracy), t(119) = 2.42, p < .05. They were not able to discriminate from truths lies that Indians had received no incentive to conceal or lies that Indians had received an incentive to convey (accuracy rates = 50.64% and 49.56%, respectively, ns). This produced a simple Liar's Goal x Incentive interaction on the percentage of lies and truths correctly judged by Indians, F(1, 294) = 3.96, p < .05. Thus, in judgments made by compatriots, motivation impaired Indians' ability to conceal and convey lies. Non-Indians' judgments were unaffected by the Indians' motivation to lie. Non-Indian judges were, however, more accurate in discriminating from truths lies that Indians intended to convey than ones they intended to conceal (M accuracy rates = 52.34% vs. 50.43%, respectively), F(1, 294) = 5.36, p < .05.

Judgmental Biases
Our studies of American and Jordanian deceptions suggest that people give foreigners the benefit of the doubt. They also indicate that Indian illiterates are more likely than Indian university students to judge foreigners as truthful. To determine whether similar results would be evident in judgments of Indians' deceptiveness, we conducted a 2 (English vs. Marathi language) x 3 (judge sample: Indian illiterate vs. Indian student vs. non-Indian student) x 3 (modality) ANOVA on percentage truth judgments. The ANOVA revealed no general tendency for non-Indians to give Indians the benefit of the doubt. Instead, it revealed a Language x Judge Sample x Modality interaction, F(4, 291) = 5.84, p < .001, with an effect of language on illiterates' tendency to attribute truthfulness to Indians whom they could hear.
The farm workers attributed less truthfulness to Indians whom they heard speaking English than to Indians whom they heard speaking the farm workers' language Marathi (judging as truthful 52.55% vs. 66.42% of such individuals, respectively), F(1, 291) = 33.07, p < .0001. No language-based discrimination was evident in Indian students' judgments of truthfulness. Non-Indians were more likely to judge as truthful Indians whom they heard speaking in English than Indians whom they heard speaking in Marathi (judging 54.19% vs. 50.83% of such individuals as telling the truth), F(1, 219) = 5.84, p < .05. Predictably, the language in which a person description was offered had no significant effect on judgments made from video only (all such effects yield p > .20). These results may reflect a language-based ethnocentrism. Both Indian farm workers (who knew Marathi but not English) and non-Indians (who knew English but not Marathi) placed more trust in Indians who spoke a familiar language than in those who spoke an unfamiliar language.

CUMULATIVE ANALYSES
Having conducted three similar experiments, we now report a few cumulative analyses. One analysis provides an omnibus answer to a question of special significance. From an audiovisual presentation, can people detect lies that foreigners have attempted to conceal? Cumulative results show that they can. Across our three experiments, 160 judges attempted audiovisual detection of cross-culturally concealed lies. In these cross-cultural judgments, they achieved a lie/truth discrimination accuracy rate of 51.66%, which is greater than the 50% that would be expected by chance, t(159) = 2.76, p < .01. Lies that a foreigner attempts to conceal cannot be uncovered from a video-only presentation (cross-cultural accuracy rate = 49.92%, ns) and may (or may not) be uncovered from an audio-only presentation (the latter accuracy rate = 51.19%, two-tailed p = .054). Other cumulative analyses were designed to follow up on patterns established in earlier research. DePaulo and Rosenthal (1979) found that to Americans, some Americans appear honest even when they are lying and others appear dishonest even when they are telling the truth. Statistically, this "demeanor bias" reflects a positive correlation between the percentage of judges who infer deception from a given target's lie and the percentage of judges who infer deception from that same target's truth. Similar biases were evident in the judgments studied here. Here, judgments of a given target's lies were consistent with judgments of that same target's truths, whether the judgments were made from an audiovisual, audio-only, or video-only presentation and whether the people making the judgments were from the target's own culture or from other cultures. For the relationship between percentage deception judgments to a given target's lies and to that same target's truths, partial rs that control for the target person's culture were +.25, +.39, and +.37 for audiovisual, audio-only, and video-only judgments made by people from the target's own culture and +.54, +.29, and +.54 for audiovisual, audio-only, and video-only judgments made by people from other cultures (for each of these rs, p < .001; a computational sketch of such partial correlations appears below). To foreigners as well as compatriots, some people look honest even when they are lying, whereas others appear dishonest even when they are telling the truth. Bond et al. (1985) found that Americans reach substantial levels of consensus in judging Americans' deceptiveness.
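The demeanor-bias partial correlations referenced above can be computed by residualizing both percentages on the target's culture and correlating the residuals. The sketch below illustrates the procedure on randomly generated stand-in data; the real analysis would use the observed percentages for the study's 144 targets:

import numpy as np

rng = np.random.default_rng(0)
n_targets = 144
# 0 = American, 1 = Jordanian, 2 = Indian (group sizes here are illustrative).
culture = np.repeat([0, 1, 2], 48)
pct_judge_lie_deceptive = rng.uniform(20, 80, n_targets)
pct_judge_truth_deceptive = 0.4 * pct_judge_lie_deceptive + rng.normal(0, 10, n_targets)

# Regress each percentage on dummy codes for culture, keep the residuals,
# and correlate the residuals: the partial r controlling for target culture.
dummies = np.column_stack(
    [np.ones(n_targets), culture == 1, culture == 2]).astype(float)

def residuals(y):
    beta, *_ = np.linalg.lstsq(dummies, y, rcond=None)
    return y - dummies @ beta

partial_r = np.corrcoef(residuals(pct_judge_lie_deceptive),
                        residuals(pct_judge_truth_deceptive))[0, 1]
print(f"partial r = {partial_r:+.2f}")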
Analyses of the current data reveal that there is cross-cultural consensus in deception judgments as well. Across the three experiments reported here, participants judged lies and truths told by 144 target persons (40 Americans, 40 Jordanians, and 64 Indians). To assess agreement among judges from different cultures, we noted for a given target the percentage of American judges, the percentage of Jordanian judges, and the percentage of Indian judges who inferred that the target was lying and intercorrelated these percentages across the 144 target persons within each of the three modalities. For results, see Table 3. As shown in Table 3, there is statistically significant cross-cultural consensus in judgments of deceptiveness whether the judgments are made from an audiovisual, audio-only, or video-only presentation. We also found evidence of judgmental consensus across languages. When judging deception from audiovisual presentations of Jordanians speaking in Arabic, farm workers agreed with American students (r = .34, p < .05), although none of these judges understood Arabic. When judging deception from audio-only presentations of Indians speaking in Marathi, Jordanians agreed with Americans (r = .39, p < .05), although none of these judges understood Marathi. To natives of diverse cultures, some people look and sound dishonest.

GENERAL DISCUSSION
These experiments provide the first evidence to date of lie detection across cultures. Lies can be detected across cultures that share a language and across cultures that do not. They can be detected by university students and by illiterates, too.

[TABLE 2 omitted]
[TABLE 3 omitted]

Although Experiments 1 and 2 suggest that it is harder to detect a foreigner's than a compatriot's lie, it is noteworthy that the compatriot liars in the first two studies were the judges' classmates. As student peers, these judges and liars were more similar to one another than most compatriots (Sears, 1986) and, hence, might have been uniquely positioned to detect one another's lies. Experiment 3 was the only study in which participants judged lies told by nonpeer compatriots. Those lies were detected as well across cultures as within the culture. The current results have theoretical implications. They imply that there are cross-cultural similarities in the way that liars act and that behavioral concomitants of deception can be identified across cultures. Perhaps liars throughout the world have common experiences. They may fear exposure or have difficulty fabricating deceptions (Zuckerman et al., 1981). Perhaps in all cultures, liars' experiences give rise to the same behaviors, or to behaviors that convey the same impression. These function as pan-cultural detection cues. Visible cues alone are not sufficient for cross-cultural lie detection, and audible cues facilitate lie detection even when judges do not understand the liar's language. This explains why no cross-cultural lie detection was observed in an earlier study of deception judgments from video-only displays (Bond et al., 1990). In suggesting that cross-cultural lie detection is based on vocal rather than verbal cues, the current results are reminiscent of earlier demonstrations that Americans can detect Americans' lies from content-filtered speech (Zuckerman et al., 1979). Although it is possible to detect lies across cultures, international lie detection is not easy.
Similar to earlier monocultural research (Kraut, 1980), our studies of cross-cultural deception indicate that liars are often successful in their attempts to appear honest. This may imply that throughout evolutionary history, deception has been more important to the deceiver than to the target of deception (Bond et al., 1985) or that targets of deception rarely receive immediate feedback about their mistakes (DePaulo et al., 1985). Facial displays of emotion can be readily recognized across cultures, as a large research literature suggests (Ekman, 1980). At first blush, people's strong cross-cultural ability to recognize emotions may seem at odds with their weaker cross-cultural ability to detect lies. However, there is an underlying consistency. In the earlier literature, the facial displays that were recognized across cultures were poses of emotions that were not being felt (Russell, 1994). To us, the earlier literature indicates that people are adept at feigning unfelt emotions. The present findings reveal that they also are adept at feigning attraction for acquaintances they dislike. Distinct from the ability to detect deception are biases in international judgments. Often, people regard foreigners with suspicion and mistrust (Smith & Bond, 1994). Imagining that these ethnocentric stereotypes would impel people to judge foreigners as deceptive, we were surprised by the current results. People perceive foreigners as more truthful than compatriots, especially when the target can be heard. They perceive Indians who are speaking in an unfamiliar language as more deceptive than those who are speaking in a language that the perceiver knows. In our view, these judgmental biases stem from a common source: the judge's attribution for a communication failure. Attempts at cross-language communication can be frustrating (Ryan & Giles, 1982). When people fail to understand a foreigner, they search for an explanation (Smith & Bond, 1994). Sometimes, listeners attribute communication failures to their own ignorance, giving speakers the benefit of the doubt. Sometimes, they attribute the failure to the speaker, thereby externalizing blame. From our first two experiments, we infer that listeners attribute communication failure to their own ignorance when confronting a culture in which no one is speaking the listener's language. This tendency toward self-blame is strongest among the poorly educated. From Experiment 3, we infer that listeners externalize blame for communication failures when they confront a culture in which some people are speaking a familiar language and others an unfamiliar language. Then the latter can be seen as choosing to miscommunicate. Future research will be needed to test our interpretations, to survey lies from other cultures, and to understand cross-cultural deceptive interactions (Buller & Burgoon, 1996). In the meantime, some conclusions can be drawn. It is possible to detect lies across cultures. Language and cultural differences introduce biases into deception judgments. These biases can have international consequences.

NOTE
1. Throughout this article, one-sample t tests are used to assess the difference between lie/truth discrimination rates and 50%, the rate that would be expected by chance. Each t test reported in the article includes in its error term data from only those research participants whose lie/truth discrimination is being assessed.
For each such test, a corresponding test was conducted in which the denominator of the t statistic was based on the pooled within-group error term from an appropriate analysis of variance. Each time a discrimination rate differs significantly from 50% at p < .05 by the individual-error t test reported in the article, it also differs significantly from 50% at p < .05 by the corresponding pooled-error t test.

REFERENCES
Aune, R. K., & Waters, L. L. (1994). Cultural differences in deception: Motivations to deceive in Samoans and North Americans. International Journal of Intercultural Relations, 18, 159-172.
Bond, C. F., Jr., Kahler, K. N., & Paolicelli, L. M. (1985). The miscommunication of deception: An adaptive perspective. Journal of Experimental Social Psychology, 21, 331-345.
Bond, C. F., Jr., Omar, A., Mahmoud, A., & Bonser, R. N. (1990). Lie detection across cultures. Journal of Nonverbal Behavior, 14, 189-204.
Bond, C. F., Jr., Omar, A., Pitre, U., Lashley, B. R., Skaggs, L. M., & Kirk, C. T. (1992). Fishy-looking liars: Deception judgment from expectancy violation. Journal of Personality and Social Psychology, 63, 969-977.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory, 6, 203-242.
Cody, M. J., Lee, W. S., & Chao, E. Y. (1989). Telling lies: Correlates of deception among Chinese. In J. P. Forgas & M. J. Innes (Eds.), Recent advances in social psychology: Proceedings of the 24th International Congress of Psychology (Vol. 1). Amsterdam: North-Holland.
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. W., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70, 979-995.
DePaulo, B. M., & Kirkendol, S. E. (1989). The motivational impairment effect in the communication of deception. In J. C. Yuille (Ed.), Credibility assessment (pp. 51-70). Belgium: Kluwer Academic.
DePaulo, B. M., & Rosenthal, R. (1979). Telling lies: Deceiving and detecting deceit. Journal of Personality and Social Psychology, 37, 1713-1722.
DePaulo, B. M., Stone, J. I., & Lassiter, G. D. (1985). Deceiving and detecting deceit. In B. R. Schlenker (Ed.), The self and social life (pp. 323-370). New York: McGraw-Hill.
Ekman, P. (1980). The face of man: Expressions of universal emotions in a New Guinea village. New York: Garland STPM Press.
Ekman, P. (1992). Telling lies: Clues to deceit in the marketplace, politics, and marriage. New York: Norton.
Feldman, R. S. (1979). Nonverbal disclosure of deception in urban Koreans. Journal of Cross-Cultural Psychology, 10, 73-83.
Frank, M. G., & Ekman, P. (1997). The ability to detect deceit generalizes across different types of high-stakes lies. Journal of Personality and Social Psychology, 72, 1429-1439.
Jahoda, G. (1979). A cross-cultural perspective on experimental social psychology. Personality and Social Psychology Bulletin, 5, 142-148.
Kraut, R. (1980). Humans as lie detectors: Some second thoughts. Journal of Communication, 30, 209-216.
Rogoff, B. (1980). Schooling and the development of cognitive skills. In H. C. Triandis & A. Heron (Eds.), Handbook of cross-cultural psychology (Vol. 4, pp. 233-294). Boston: Allyn & Bacon.
Russell, J. A. (1994). Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychological Bulletin, 115, 102-141.
Ryan, E. B., & Giles, H. (1982). Attitudes towards language variation. London: Edward Arnold.
Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data base on social psychology's view of human nature. Journal of Personality and Social Psychology, 51, 515-530.
Smith, P. B., & Bond, M. H. (1994). Social psychology across cultures: Analysis and perspectives. Boston: Allyn & Bacon.
Tresserras, R., Canela, J., Alvarez, J., Sentis, J., & Salleras, L. (1992). Infant mortality, per capita income, and adult illiteracy: An ecological approach. American Journal of Public Health, 82, 435-438.
Triandis, H. C. (1994). Culture and social behavior. In W. J. Lonner & R. Malpass (Eds.), Psychology and culture (pp. 169-173). Boston: Allyn & Bacon.
Zuckerman, M., DeFrank, R. S., Hall, J. A., Larrance, D. T., & Rosenthal, R. (1979). Facial and vocal cues of deception and honesty. Journal of Experimental Social Psychology, 15, 378-396.
Zuckerman, M., DePaulo, B., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14, pp. 1-59). New York: Academic Press.

Charles F. Bond, Jr., Texas Christian University, U.S.A.
Adnan Omar Atoum, Yarmouk University, Jordan

Authors' Note: We are grateful to Swati Apte, Sharon Ekhardt, Darryl Johnson, Michael Myers, M. N. Palsane, and Urvashi Pitre for help with this research. The first author's contribution was supported by an Indo-American fellowship from the Indo-U.S. Subcommission on Education and Culture and by a paid leave of absence from Texas Christian University. Correspondence should be addressed to Charles F. Bond, Jr., Box 298920, Department of Psychology, Texas Christian University, Fort Worth, TX 76129, U.S.A.; e-mail: c.bond@tcu.edu.

The effects of suspicion on the recall of cues used to make veracity judgments Communication Reports Salt Lake City Winter 1998
--------------------------------------------------------------------------------
Authors: Murray G Millar
Authors: Karen U Millar
Volume: 11
Issue: 1
Pagination: 57-64
ISSN: 08934215
Subject Terms: Memory
Psychology
Communication
Research
Abstract: A study examined the effects of suspicion on the recall of cues used to make veracity judgments. It was hypothesized that more noncontent cues would be attended to by suspicious individuals than nonsuspicious individuals.
Copyright Western States Communication Association Winter 1998
Full Text: The present study investigated the effects of suspicion on the recall of cues used to make veracity judgments. It was hypothesized that more noncontent cues would be attended to by suspicious individuals than nonsuspicious individuals. One hundred thirteen participants viewed videotapes of persons describing their work history truthfully or deceitfully under high and low suspiciousness conditions. Following the videotapes, the participants were asked to indicate whether the applicant had held the job and produced a written explanation for their judgments. When these statements were content analyzed for types of deception cues, the results supported the hypothesis. A substantial amount of research has attempted to identify the types of behavioral and verbal cues that are produced during deception.
For example, deTurck and Miller (1990) identified six behavioral cues unique to deception-induced arousal: message duration, pauses, nonfluencies, response latencies, adaptors, and hand gestures. Additionally, deceivers tend to blink more (Riggio & Friedman, 1983), speak at a higher pitch (Ekman, Friesen, O'Sullivan, & Scherer, 1980), exhibit increased pupil dilation (O'Hair, Cody, & McLaughlin, 1981), and make more negative statements (Mehrabian, 1967). In two reviews of this literature, Zuckerman and his colleagues have identified behavioral and verbal cues that are reliably associated with deception (Zuckerman, DePaulo, & Rosenthal, 1981; Zuckerman & Driver, 1985). Attention has also been given to the behavioral and verbal cues that are actually utilized by people to detect deception (e.g., Bond, Omar, Pitre, Lashley, Skaggs, & Kirk, 1992; deTurck, Harszlak, Bodhorn, & Texter, 1990; Stiff & Miller, 1986). There is reason to assume that the cues used by people to make veracity judgments are not the actual diagnostic cues associated with deception. That is, if detectors used diagnostic cues to detect deception, then we would expect them to be relatively accurate. However, studies that have examined deception detection accuracy have rarely found mean detection accuracies significantly above what would be expected from random guessing (DePaulo, Stone, & Lassiter, 1985; Kraut, 1978; Kraut & Poe, 1980; Zuckerman et al., 1981; Zuckerman & Driver, 1985). Fiedler and Walka (1993) have proposed that most detectors rarely receive prompt feedback on the objective truth, which prevents any efficient learning. Further, when people are trained to use diagnostic cues, their detection accuracy improves. For example, when deTurck et al. (1990) trained detectors to use reliable deception cues, a significant increase in detection accuracy was reported. The studies that have examined cues utilized in veracity judgments indicated a reliance on the verbal content of the message (e.g., Fiedler & Walka, 1993). Fiedler and Walka (1993) suggested that naive observers rely on heuristics (cf. Kraut, 1978; McCornack & Parks, 1986; Stiff & Miller, 1986). Two commonly used cues are the uniqueness of the event reported and the falsifiability of the message. That is, if the event reported is unexpected or contains a large number of factual (vs. emotional) statements, the communicator is more likely to be perceived as lying. Further, McCornack and his colleagues found that other elements of the message (e.g., message clarity, relevance, quality, and quantity) influenced the perceived deceptiveness of the message (McCornack & Parks, 1986; McCornack & Levine, 1990; McCornack, Levine, Solowczuk, Torres, & Campbell, 1992). Specifically, McCornack et al. (1992) demonstrated that manipulations of quality (e.g., not providing highly sensitive information) were perceived as the "most deceptive" messages. However, it is possible that elements of the deception situation may cause people to attend to other types of cues (Ebesu & Miller, 1994; Millar & Millar, 1995; Stiff, Miller, Sleight, Mongeau, Garlick, & Rogan, 1989). One variable that may cause people to shift to other types of cues is situationally aroused suspicion. In low-suspicion situations, relying on verbal content cues may reflect the strong tendency for people to limit the amount of their cognitive effort when making judgments about others.
That is, there is considerable evidence that people tend to be cognitive misers when making inferences about others (e.g., Gilbert, 1989; Gilbert, McNulty, Giuliano, & Benson, 1992; Langer, Blank, & Chanowitz, 1978; Quattrone, 1982). Consequently, people in low-suspicion situations may attend to the most available and easily retrieved information to make veracity judgments. Although availability of information does not necessarily imply that the person will utilize the information to make veracity judgments, there is considerable evidence suggesting that availability is related to utilization. For example, Fazio and his colleagues have developed an extensive line of research demonstrating that attitudes available in memory are utilized to direct behavior (Fazio, 1989; Fazio, Chen, McDonel, & Sherman, 1982). In most spoken interactions, the goal is to understand and respond to the content of the message. Therefore, the content of the message will be more available and may be utilized to make veracity judgments (e.g., does the information make sense, is there sufficient detail). Alternatively, in moderately high suspicion situations, persons may be motivated to expend more cognitive effort. That is, a number of researchers have proposed that moderate levels of suspicion operate to increase the amount of cognitive effort used by people to make veracity judgments (Burgoon, Buller, Ebesu, & Rockwell, 1994; Stiff, Kim, & Ramesh, 1992; McCornack & Levine, 1990). Consistent with this proposal, Hilton and his colleagues have repeatedly found that suspicious observers were less likely to rely on heuristics when making judgments about other people and were more likely to engage in more deliberative attributional thinking and the examination of subtle cues (Fein, Hilton, & Miller, 1990; Hilton, Fein, & Miller, 1993). Consequently, people in suspicious situations may be more likely to attend to some less-available aspects of the interaction. That is, suspicious people may also attend to noncontent cues (e.g., gaze aversion). In the present study, it was hypothesized that

H1: While making veracity judgments, people who are suspicious will use more noncontent cues than will people who are not suspicious.
H2: While making veracity judgments, people who are suspicious will use fewer content cues than will people who are not suspicious.

METHOD

Participants
The sample consisted of 113 participants (64 female and 49 male) who were recruited to participate in the study. Twenty-seven participants were recruited from a large urban community and 86 from the undergraduate population at a large southwestern university. Participants were recruited on a voluntary basis, with class credit offered for undergraduates. The ages of the participants ranged from 17 years to 58 years, with the average age of the sample being 26 years. The subjects participated in the experimental conditions in groups of three to four people, and the sessions were conducted by male and female experimenters. Subjects were randomly assigned to each of the two experimental conditions.

Materials
Videotapes were constructed using 24 (14 female and 10 male) undergraduates, recruited from a large southwestern university, who participated on a voluntary basis. At the beginning of each session, participants were required to list in order their five most recent jobs.
To enhance motivation to lie successfully, the communicators were advised that the researchers were interested in deceptive ability because the ability to lie successfully is related to a number of important skills and traits such as intelligence (see DePaulo, Kirkendol, Tang, & O'Brien, 1988). Then the communicators were informed that they would be videotaped twice: once while they were being truthful and once while they were being deceitful. In the truth session, the communicators were asked to state truthfully what their most recent job was and to respond to a series of questions about this job. For example, applicants were asked: What was the best part of your last job? What did you think of your boss? Describe your coworkers (see Bond, Kahler, & Paolicelli, 1985, for a similar procedure). In the deceitful session, the communicators were instructed to lie about what their most recent job was (communicators were restricted to jobs that were plausible) and to invent answers to the questions about their jobs. None of the participants, when asked to lie, described jobs held (according to their earlier statements) in the last five years. The order of the truthful and deceitful video sessions was randomized. After completing the videotaping sessions, the communicators were debriefed and released.

Procedure
Participants were recruited to a study investigating job interview impression management. They were informed that they would be viewing videotapes, made during a job interview, of a number of people describing their most recent jobs. Participants were asked to watch each videotape very closely because they would be required to make a number of judgments about the interviewee's performance. Each participant viewed a videotape that consisted of eight different communicators describing their most recent jobs. On each tape presented to the participants, one-half of the descriptions were truthful and one-half were deceitful. The order of the truthful and deceitful communications was randomized on each tape.

Suspicion manipulation. Before beginning the videotape, the experimenter provided the participants with some background information about the eight interviews being presented. Then directions were given to manipulate suspicion. Participants who were randomly assigned to the high suspicion condition were told that some of the people being interviewed may not be completely truthful (see Burgoon et al., 1994, for a similar manipulation). Participants assigned to the low suspicion condition were told nothing about the potential veracity of the interviewee.

Measures. Immediately following the background information, participants were seated four to five feet from a television monitor that displayed the videotapes of the eight interviews. After each interview the tape was stopped, and the participants were asked to evaluate the performance of the job applicant on a number of scales (see McCornack & Levine, 1990, for a discussion of the importance of using filler items). For example, the participants were asked to indicate by checking "yes" or "no" whether the applicant had adequate social skills. After each yes/no response the participants were asked to indicate how confident they were about this judgment on a scale with endpoints of 0% and 100%. Embedded in these items was one question that asked the participants to indicate whether the applicant had held the job he/she was describing in the interview.
In addition, space was provided for the participants to explain why they thought the applicant had or had not held the job. These written responses provided the material for the content analysis. When the participants had finished judging all eight candidates on the videotapes, the participants indicated how suspicious they would be of a person in the same situation as presented in the videos on a scale with endpoints of "1" meaning very suspicious and "9" meaning not suspicious. The participants were then questioned concerning their understanding of the experimental hypotheses, debriefed, and released.

RESULTS
Manipulation check. To determine whether participants in the low- and high-suspicion conditions perceived the situation differently, responses to the suspiciousness question were analyzed in a one-factor (high vs. low suspicion) analysis of variance (ANOVA). As expected, the participants in the high suspicion condition reported being significantly more suspicious (M = 3.94) than participants in the low suspicion condition (M = 6.42), F(1, 111) = 68.38, p < .001, eta2 = .38.

Suspiciousness and cue use. Each of the statements in the participants' open-ended descriptions of why they thought the applicant had or had not held the job was coded as referring to a content cue or a noncontent cue. Content cues refer to specific aspects of the verbal content of the message; they focus on what was said by the communicator. For example, content cues include judgments about whether the information presented by the communicator was correct and whether the communicator was able to provide details about the job. Noncontent cues refer to all other visual and vocal deception cues. For example, noncontent cues include judgments about whether the communicator spoke quickly, shifted postures, or hesitated before responding (see Stiff et al., 1989, for an examination of visual and vocal cues). Two research assistants who were unaware of the experimental hypothesis coded the statements independently. Inter-rater reliability was strong (r = .85). The content cues and the noncontent cues listed by each participant were summed across the eight interviews. These sums were analyzed in a 2 (high vs. low suspicion) x 2 (content vs. noncontent cues) ANOVA with repeated measures assumed on the last factor (a minimal computational sketch of this analysis appears below). The main effect for cue type was significant, F(1, 111) = 175.46, p < .001, eta2 = .39. In general, participants used more content cues than noncontent cues to make their veracity judgments. As expected, a significant interaction between cue type and suspicion emerged, F(1, 111) = 4.65, p = .03, eta2 = .04. Simple effects analysis revealed that, consistent with H1, more noncontent cues were used when participants were high in suspicion (M = 2.89) than when they were low in suspicion (M = 2.48), F(1, 111) = 7.12, p = .01, eta2 = .06. Similarly, as H2 predicts, fewer content cues were used when participants were high in suspicion (M = 6.41) than when they were low in suspicion (M = 7.15), F(1, 111) = 6.43, p = .01, eta2 = .05.

DISCUSSION
The present research indicated that people attempting to assess veracity recalled using significantly more content cues than noncontent cues. This finding is consistent with previous research demonstrating the preference for global heuristics and content-related cues by lie detectors (Fiedler & Walka, 1993; McCornack & Parks, 1986; Stiff & Miller, 1986). The present research also indicated that suspicion influences the number of noncontent cues recalled.
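Here is a minimal sketch of that 2 (suspicion, between subjects) x 2 (cue type, within subjects) mixed-design analysis, using the pingouin package's mixed_anova function; the cue counts below are hypothetical illustrative data, not the study's:

import pandas as pd
import pingouin as pg

# Hypothetical cue counts: one content and one noncontent sum per participant.
records = []
for pid in range(20):
    suspicion = "high" if pid < 10 else "low"
    content = (6.4 if suspicion == "high" else 7.2) + pid % 3
    noncontent = (2.9 if suspicion == "high" else 2.5) + pid % 2
    records.append({"id": pid, "suspicion": suspicion, "cue": "content", "count": content})
    records.append({"id": pid, "suspicion": suspicion, "cue": "noncontent", "count": noncontent})
df = pd.DataFrame(records)

# Mixed-design ANOVA: suspicion is between subjects, cue type is within subjects.
print(pg.mixed_anova(data=df, dv="count", within="cue", subject="id", between="suspicion"))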
Specifically, participants who were suspicious recalled using more noncontent cues than did participants who were less suspicious. We suggest that this pattern occurred because individuals, when not suspicious, respond to what is being said in the message. When suspicious, however, participants exert more cognitive effort and also attend to noncontent cues. The results of the present study may provide a clue to the relationship between suspicion and detection accuracy in relational partners found by McCornack and Levine (1990). These researchers, as well as others, have found that in relationships, partners exhibit a "truth bias" that interferes with deception detection (Buller & Aune, 1987; McCornack & Parks, 1986). However, McCornack and Levine (1990) also demonstrated that moderate levels of suspicion are associated with increased accuracy. Perhaps, as in the interview setting, suspicious partners begin to attend to more diagnostic noncontent cues in addition to the content of the message. Although the results are consistent with our expectations, the present study does have limitations. Participants were asked to make self-reports on the cues they used to detect deception. A large body of research has indicated that people often have difficulty describing their cognitive processes (Nisbett & Wilson, 1977). It may be that the cues the participants recalled were not all the cues they actually used to make their veracity judgments. That is, it is possible that some deception cues may be used unconsciously or may simply be difficult to recall. However, the present study has demonstrated an enhanced ability to recall noncontent cues when participants are suspicious. The relationship between the cues that are recalled and the cues that are actually used is an issue for future research.

REFERENCES
Bond, C., Kahler, K., & Paolicelli, L. (1985). The miscommunication of deception: An adaptive perspective. Journal of Experimental Social Psychology, 21, 331-345.
Bond, C. F., Jr., Omar, A., Pitre, U., Lashley, B. R., Skaggs, L. M., & Kirk, C. T. (1992). Fishy-looking liars: Deception judgment from expectancy violation. Journal of Personality and Social Psychology, 63, 969-977.
Buller, D. B., & Aune, R. K. (1987). Nonverbal cues to deception among intimates, friends, and strangers. Journal of Nonverbal Behavior, 11, 269-290.
Burgoon, J. K., Buller, D. B., Ebesu, A. S., & Rockwell, P. (1994). Interpersonal deception: V. Accuracy in deception detection. Communication Monographs, 61, 303-325.
deTurck, M. A., Harszlak, J. J., Bodhorn, D. T., & Texter, L. A. (1990). The effects of training social perceivers to detect deception from behavioral cues. Communication Quarterly, 38, 1-11.
deTurck, M. A., & Miller, G. R. (1990). Training observers to detect deception: Effects of self-monitoring and rehearsal. Human Communication Research, 16, 603-620.
DePaulo, B. M., Kirkendol, S., Tang, J., & O'Brien, T. (1988). The motivational impairment effect in the communication of deception: Replications and extensions. Journal of Nonverbal Behavior, 12, 177-202.
DePaulo, B. M., Stone, J. I., & Lassiter, G. D. (1985). Deceiving and detecting deceit. In B. R. Schlenker (Ed.), The self and social life (pp. 323-370). New York: McGraw-Hill.
Ebesu, A. S., & Miller, M. D. (1994). Verbal and nonverbal behaviors as a function of deception type. Journal of Language and Social Psychology, 13, 418-442.
Ekman, P., Friesen, W. V., O'Sullivan, M., & Scherer, K. R. (1980). Relative importance of face, body, and speech in judgments of personality and affect. Journal of Personality and Social Psychology, 38, 270-277.
Fazio, R. H. (1989). On the power and functionality of attitudes: The role of attitude accessibility. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 153-179). Hillsdale, NJ: Erlbaum.
Fazio, R. H., Chen, J., McDonel, E., & Sherman, S. (1982). Attitude accessibility, attitude-behavior consistency, and the strength of the object-evaluation association. Journal of Experimental Social Psychology, 18, 339-353.
Fein, S., Hilton, J. L., & Miller, D. T. (1990). Suspicion of ulterior motivation and the correspondence bias. Journal of Personality and Social Psychology, 58, 753-764.
Fiedler, K., & Walka, I. (1993). Training lie detectors to use nonverbal cues instead of global heuristics. Human Communication Research, 20, 199-223.
Gilbert, D. T. (1989). Thinking lightly about others: Automatic components of the social inference process. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 189-210). New York: Guilford.
Gilbert, D. T., McNulty, S. E., Giuliano, T. A., & Benson, J. E. (1992). Blurry words and fuzzy deeds: The attribution of obscure behavior. Journal of Personality and Social Psychology, 62, 18-25.
Hilton, J. L., Fein, S., & Miller, D. T. (1993). Suspicion and dispositional inference. Personality and Social Psychology Bulletin, 19, 501-512.
Kraut, R. (1978). Verbal and nonverbal cues in the perception of lying. Journal of Personality and Social Psychology, 36, 380-391.
Kraut, R., & Poe, D. (1980). Behavioral roots of person perception: The deception judgments of customs inspectors and laymen. Journal of Personality and Social Psychology, 39, 784-798.
Langer, E., Blank, A., & Chanowitz, B. (1978). The mindlessness of ostensibly thoughtful action. Journal of Personality and Social Psychology, 36, 635-642.
Mehrabian, A. (1967). Orientation behaviors and nonverbal attitude communication. Journal of Communication, 17, 324-332.
McCornack, S. A., Levine, T. R., Solowczuk, K., Torres, H. I., & Campbell, D. M. (1992). When the alteration of information is viewed as deception: An empirical test of information manipulation theory. Communication Monographs, 59, 17-29.
McCornack, S. A., & Levine, T. R. (1990). When lovers become leery: The relationship between suspicion and accuracy in detecting deception. Communication Monographs, 57, 219-230.
McCornack, S. A., & Parks, M. R. (1986). Deception detection and relationship development: The other side of trust. In M. L. McLaughlin (Ed.), Communication yearbook 9 (pp. 337-389). Beverly Hills, CA: Sage.
Millar, M. G., & Millar, K. U. (1995). Detection of deception in familiar and unfamiliar persons: Effects of information restriction. Journal of Nonverbal Behavior, 19, 69-84.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259.

The effects of cognitive capacity and suspicion on truth bias Communication Research Beverly Hills Oct 1997
--------------------------------------------------------------------------------
Authors: Murray G Millar
Authors: Karen U Millar
Volume: 24
Issue: 5
Pagination: 556-570
ISSN: 00936502
Subject Terms: Cognition & reasoning
Communication
Perceptions
Abstract: Millar and Millar investigated the effects of cognitive capacity and suspicion on veracity judgments.
Copyright Sage Publications, Inc. Oct 1997
Full Text: This study investigated the effects of cognitive capacity and suspicion on veracity judgments. It was hypothesized that under low suspicion conditions, truth bias would be more pronounced when participants had low cognitive capacity than when participants had high cognitive capacity. One hundred and seven participants viewed presentations of people either truthfully or deceptively describing a series of pictures. Prior to the presentations, a short description designed to increase suspicion was read to half the participants. Participants viewed half of the presentations while working on arithmetic problems (low capacity) and the other half while not working on arithmetic problems (high capacity). Following each presentation, the participants were required to evaluate the communicator's performance on a number of scales and indicate whether the communicator was actually describing the picture. The results partially supported the hypothesis. Most studies that have examined the ability of people to detect deception have found that the detection accuracy of naive observers rarely exceeds 60% and usually is near levels that would be expected by chance (DePaulo, Stone, & Lassiter, 1985; DePaulo, 1988; DePaulo & DePaulo, 1989; Ekman, 1985; Ekman & O'Sullivan, 1991; Kraut, 1980; Zuckerman, DePaulo, & Rosenthal, 1981; Zuckerman & Driver, 1985). A variety of reasons have been offered for the low detection accuracies reported in these studies. For example, low detection accuracy may have been caused by the absence of baseline information on the deceiver's truthful behavior (e.g., Brandt, Miller, & Hocking, 1982; O'Sullivan, Ekman, & Friesen, 1988) or the absence of probes by the detector (e.g., Buller, Strzyzewski, & Comstock, 1991; Stiff & Miller, 1986). However, correcting these problems has usually produced only modest gains in detection accuracy. Another reason people may have difficulty detecting deception is their tendency to assume that people are telling the truth in most interpersonal interactions (McCornack & Parks, 1986). That is, the assumption of truthfulness, or truth bias, may prevent people from searching for deception cues, thus reducing detection accuracy. A large number of studies have indicated that truth bias is a powerful and widespread effect occurring in interactions between familiar people (e.g., McCornack & Levine, 1990; McCornack & Parks, 1986; Millar & Millar, 1995; Stiff, Kim, & Ramesh, 1992) and nonfamiliar people (e.g., Buller, Strzyzewski, & Hunsaker, 1991; DePaulo et al., 1985; Kalbfleisch, 1992; Riggio, Tucker, & Throckmorton, 1987). Two explanations have been offered for the widespread presence of truth bias. First, a functional explanation suggested that truth bias facilitates communication and maintains relationships.
For example, Kraut and Higgins (1984) proposed that the assumption of truthfulness in a conversational partner is a fundamental part of most conversations, allowing participants to go beyond what literally is said (cf. Clark & Clark, 1977; Grice, 1975). Similarly, McCornack and Parks (1986) suggested that trust or a truth bias is an integral part of maintaining intimacy in close relationships. A second explanation proposes that truth bias may be the result of a cognitive heuristic or simple decision rule (Buller et al., 1991; Stiff et al., 1992). That is, when evaluating the veracity of communications, people use the simple decision rule or heuristic that most messages are truthful.1 The proposal that people use this simple decision rule is consistent with a vast amount of evidence indicating that social perceivers employ a large variety of both general and idiosyncratic heuristics (Tversky & Kahneman, 1974, 1983). There is also evidence that people employ a number of heuristics (e.g., infrequency, falsifiability) when they are motivated to attempt to detect deception (Fiedler & Walka, 1993). Heuristic processing of information allows for the evaluation of complex stimuli with a limited amount of cognitive effort (Tversky & Kahneman, 1974, 1983). The heuristic proposed by Stiff et al. (1992) would limit the amount of cognitive effort used in making veracity judgments. That is, the simple decision rule that others are generally telling the truth (truth bias) requires far less cognitive effort than scrutinizing each message for deception cues. Limiting effort is important when making veracity judgments because people do not have enough cognitive capacity to make veracity judgments based on a careful scrutiny of each communication. Furthermore, there is considerable evidence that people tend to be cognitive misers when making inferences about other people (e.g., Gilbert, 1989; Gilbert, McNulty, Giuliano, & Benson, 1992; Langer, Blank, & Chanowitz, 1978; Quattrone, 1982). Although interpreting truth bias as resulting from a cognitive heuristic is consistent with current social cognitive approaches to person perception, there have been few attempts to provide evidence for this conceptualization. The present research endeavored to provide support for a heuristic explanation of truth bias by manipulating the amount of cognitive capacity available to people making veracity judgments. If a truth bias heuristic operates to reduce cognitive demands, then the amount of cognitive capacity available to make an inference should be related to the use of the heuristic. That is, as people have less cognitive capacity, they should be more motivated to reduce processing demands through the use of heuristics (Chaiken, Liberman, & Eagly, 1989). Consequently, when people have less cognitive capacity available to make a veracity judgment, truth bias should become more pronounced. In addition to manipulating cognitive capacity, suspicion was manipulated in the current research. Suspicion was manipulated in an effort to link possible changes in veracity judgments to the use of the truth bias heuristic. Research on veracity judgments has repeatedly demonstrated that increases in suspicion lead to more perceptions of deception (e.g., McCornack & Levine, 1990; Stiff et al., 1992; Toris & DePaulo, 1985). It seems as if suspicion motivates people to initiate more mindful processing (Buller, Strzyzewski, & Comstock, 1991; Burgoon, Buller, Ebesu, & Rockwell, 1994; Hilton, Fein, & Miller, 1993).
(1992) have concluded that suspiciousness was enough to offset the use of the truth bias heuristic even in well-developed relationships. If suspicion motivates more mindful processing, then suspicion may limit heuristic processing when people have sufficient cognitive capacity. It was expected that when participants are not suspicious, reductions in cognitive capacity should cause more use of the truth bias heuristic. That is, as capacity is reduced, more heuristic processing would be employed to reduce cognitive demands. Consequently, participants low in suspicion should make more truth judgments when they have low cognitive capacity than when they have high cognitive capacity. Alternatively, when participants are suspicious, this effect should reverse. Suspicious participants who have sufficient cognitive capacity should use less of the truth bias heuristic. That is, suspicion initiates more mindful processing and less reliance on heuristics. However, when suspicious participants have less cognitive capacity, they may again need to rely on heuristic processing and adopt a lie bias (Levine & McCornack, 1991). Consequently, participants high in suspicion should make more truth judgments when they have high cognitive capacity than when they have low cognitive capacity. Overall, suspicion should be associated with fewer truth judgments as well as interact with cognitive capacity. In the present study, participants were required to view four communicators who were describing pictures truthfully and deceitfully. Prior to the presentations, half of the participants read a short description of the situation that was designed to increase suspicion about the communicators. The other half of the participants did not read this description. Then the participants viewed two presentations while being distracted (low capacity) and two of the presentations with no distraction (high capacity). After viewing each presentation, the participants were required to evaluate the communicator's performance on a number of scales. Embedded in these items was one question that asked the participants to indicate whether the communicator was actually describing what was in the picture; that is, the participants were asked to judge the veracity of the communicator. Method Participants One hundred and seven (62 female and 45 male) participants recruited both from a large urban community and from undergraduates at a large southwestern university participated in the study. Participants were recruited using sign-up sheets placed on campus bulletin boards. Participants received no monetary compensation, but class credit was offered to undergraduates. The ages of the participants ranged from 17 to 80 years, with an average age of 33 years. The subjects participated individually in experimental sessions that were conducted by male and female experimenters in standard classrooms. Participants were randomly assigned to one of the two between-subjects experimental conditions: low suspicion or high suspicion. Materials Three videotapes were constructed in which four communicators described two pictures in a truthful manner and two pictures in a deceitful manner. The order of the truthful and deceitful communications was randomized in each tape. The communicators were 22 (12 females and 10 males) undergraduates who were recruited from a large southwestern university.
At the beginning of the session, to enhance motivation to lie successfully, the communicators were advised that the researchers were interested in deceptive ability because the ability to lie successfully is related to a number of important skills and traits, such as intelligence (see DePaulo, Kirkendol, Tang, & O'Brien, 1988). Then the communicators were presented with a series of pictures depicting pleasant scenes (e.g., a mother and son playing on a beach) and unpleasant scenes (e.g., a train accident). For half the trials, the communicator was instructed to be truthful, and for the other half of the trials, the communicator was instructed to be deceitful. The order of the different types of trials was randomized. If the communicator was instructed to be truthful, he or she was required to describe (a) what kind of overall feeling the picture created, (b) what was happening in the picture, and (c) the people or objects in the picture. If the communicator was instructed to lie, he or she was required to describe (a) the opposite feeling to what the picture created, (b) something that was not happening in the picture, and (c) people or objects that were not in the picture (see Ekman & O'Sullivan, 1991, for a similar procedure). Procedure Participants were recruited for a study that was ostensibly investigating how situational factors influence the evaluation of public presentations. Participants were informed that they would be required to view people making presentations describing pictures. The participants were also informed that they would have to view the presentations while experiencing a variety of different situational factors. Suspicion manipulation. Before beginning the presentations, the experimenter provided the participants with some background information about the four communicators. Participants who were randomly assigned to the high suspicion condition were told that some of the people being interviewed might be presenting false information (see Levine & McCornack, 1991; McCornack & Levine, 1990; Stiff et al., 1992, for similar manipulations). Participants assigned to the low suspicion condition were told nothing about the potential veracity of the communicators. Cognitive capacity. Immediately following the background information, participants began viewing the four presentations. On two of the four trials, an attempt was made to limit the amount of cognitive capacity available to the detectors while viewing the presentation. On these trials, the detector was required to solve simple arithmetic problems while viewing the communicator's description of the picture. Detectors were asked to solve as many problems as they could and were told that the math problems would be used in a later portion of the study. Problems were presented to the detector for the entire length of the communication. On the other two of the four trials, the detectors were not required to solve arithmetic problems (see Festinger & Maccoby, 1964; Osterhouse & Brock, 1970; Petty, Wells, & Brock, 1976; Przybyla & Byrne, 1984, for similar procedures). All six possible orders of the two high capacity and two low capacity trials were used. Measures. After each presentation, the detector was asked to evaluate the presenter's performance on a number of scales. For example, the detectors were asked to indicate by checking yes or no whether the communicator had spoken clearly or whether the communicator liked what he or she had been describing.
After each yes or no response, the detectors were asked to indicate how confident they were about this judgment on a scale with endpoints of 0% to 100%. Embedded in these items was a question that asked the detectors to indicate whether the communicator had described what was actually in the picture; that is, the participants were asked indirectly to indicate whether they believed the person was lying. When the detectors had finished answering these questions, they were asked to write down as much of the communicator's description of the picture as they could recall. Also, participants were required to complete a thought-listing procedure to get an index of what they were thinking about while watching the videotape. In this procedure, detectors were provided with a form containing eight boxes and asked to write down the first thought they had while listening to the presentation in the first box, the second thought in the second box, and so on. After completing the listing of their thoughts, the detectors were asked to place a + beside the box if the thought was favorable toward the presentation, a - if the thought was negative toward the presentation, or a 0 if the thought was irrelevant or neutral toward the presentation (see Greenwald, 1968; Wu & Shaffer, 1987, for descriptions of this procedure). When the detectors had responded to each of the four presentations, they were asked to indicate their age and gender. Also, they were asked to indicate how suspicious they were of the communicators on a 9-point scale with endpoints of 1 = not suspicious and 9 = very suspicious. Finally, the participants were questioned concerning their understanding of the experimental hypotheses and debriefed. Results Manipulation check. To assess the effectiveness of the suspicion manipulation, the participants were asked to indicate how suspicious they were of the communicators on a 9-point scale. When scores from this scale were analyzed in a one-factor (high vs. low suspicion) analysis of variance (ANOVA), participants in the high suspicion conditions indicated more suspicion (M = 6.07, SD = 1.93) than participants in the low suspicion conditions (M = 1.96, SD = 1.18), F(1, 105) = 174.81, p < .001, η² = .62. To assess the effectiveness of the cognitive capacity manipulation, the number of problems solved was summed for the low capacity trials. In the low capacity conditions, the participants solved an average of 8.52 problems. When the number of problems solved in the low capacity conditions was analyzed in a one-factor (high vs. low suspicion) ANOVA, the level of suspicion did not produce any significant differences. Veracity judgments. A measure indicating the number of judgments of deception was constructed by summing the number of times each participant indicated that a communicator had not described what was in the picture, that is, the number of times he or she indicated that the communicator was being deceptive. Separate scores were created for the truthful communications with low and high capacity and for the deceitful communications with low and high capacity. These scores were analyzed in a 2 (high vs. low suspicion) x 2 (high vs. low capacity trial) x 2 (truthful vs. deceitful communication) ANOVA with repeated measures assumed on the last two factors (a sketch of this design appears below).
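The full design is a three-way mixed-model ANOVA (one between-subjects factor, two repeated factors), which has no single turnkey routine in the common Python statistics libraries. The sketch below, built entirely on hypothetical data, shows how the within-subject cell scores could be laid out and how the key simple effect (capacity within the low-suspicion group) could be tested, using the fact that with one numerator degree of freedom the repeated-measures F equals the squared paired t.

```python
# Minimal sketch of the 2 (suspicion) x 2 (capacity) x 2 (veracity) mixed
# design described above. All data are hypothetical; this is not the
# authors' analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_low_suspicion = 53  # assumed split of the N = 107 sample; gives df = 52

# Each participant contributes a 0-2 count of deception judgments per
# within-subject cell; here only the two capacity cells for the
# low-suspicion group are simulated (collapsed over veracity).
low_capacity_counts = rng.binomial(2, 0.25, n_low_suspicion)   # fewer lie judgments
high_capacity_counts = rng.binomial(2, 0.45, n_low_suspicion)

# Simple effect of capacity within the low-suspicion group: with one
# numerator df, the repeated-measures F equals the squared paired t.
t, p = stats.ttest_rel(low_capacity_counts, high_capacity_counts)
print(f"paired t({n_low_suspicion - 1}) = {t:.2f}, F = {t**2:.2f}, p = {p:.3f}")
```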
As expected, a significant Suspicion x Capacity interaction was obtained, F(1, 105) = 5.36, p = .02, η² = .05.2 When participants were less suspicious, they made fewer judgments of deception when they had low capacity than when they had high capacity, F(1, 52) = 5.14, p = .03, η² = .09. By contrast, when participants were more suspicious, there was no significant difference between the two capacity groups, F = 1.00 (see Table 1). In addition to an interaction, a strong main effect for suspicion was found, F(1, 105) = 11.53, p = .001, η² = .09. Participants made more judgments of deception in the high suspicion conditions than in the low suspicion conditions. Detection accuracy. For each participant, the number of times he or she correctly judged whether the communicator was being truthful or deceitful was analyzed in the standard ANOVA. As expected, a significant main effect for type of communication (truthful vs. deceptive) was obtained, F(1, 105) = 328.43, p < .0001, η² = .76. Truthful statements were more accurately detected (M = 1.70, SD = .55) than deceptive statements (M = .26, SD = .52). In addition, a significant interaction between type of communication and level of suspicion was obtained, F(1, 105) = 9.07, p = .002, η² = .08. Participants were more accurate with deceptive communications when they were high in suspicion (M = .37, SD = .62) than when they were low in suspicion (M = .16, SD = .37), F(1, 105) = 4.43, p = .04, η² = .05. Alternatively, participants were more accurate with truthful communications when they were low in suspicion (M = 1.85, SD = .41) than when they were high in suspicion (M = 1.55, SD = .63), F(1, 105) = 8.33, p = .005, η² = .07. Confidence. For each participant, his or her confidence ratings of the deceptive judgments were also analyzed in the standard ANOVA, and the only effect to reach significance was a main effect for capacity, F(1, 105) = 13.23, p < .001, η² = .11. When participants had low capacity, they were less confident of their judgments (M = 74.44, SD = 20.36) than when they had high capacity (M = 80.37, SD = 21.45). Also, the relationship between confidence in the judgment and accuracy of the judgment was examined. When the participants' overall confidence ratings were correlated with their overall accuracy ratings, no significant relationship was found (r = -.13, p = .16). When separate correlations between accuracy and confidence were calculated in each of the experimental conditions, the only correlation to reach significance was in the high cognitive capacity and low suspicion condition (r = -.34, p = .01). Recall accuracy. The statements produced by the participants during the message recall procedure were scored for accuracy by two raters. Overall, there was a high degree of consistency between the raters (r = .91), and discrepancies were resolved by discussion. An accuracy index was created by subtracting the number of incorrect statements (statements that did not reflect statements in the presentation) from the number of correct statements (statements that did reflect statements in the presentation). When this index was analyzed in the standard ANOVA, a main effect for capacity was obtained, F(1, 105) = 91.85, p < .001, η² = .46. As we expected, participants were better able to recall the messages when they had high capacity (M = 6.71, SD = 2.37) than when they had low capacity (M = 4.52, SD = 1.73).
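Because every effect reported in these Results has a single numerator degree of freedom, the eta-squared values can be recomputed directly from the F ratios. The short check below, with values copied from the text, assumes the reported values are (approximately) partial eta-squared, η² = F·df1 / (F·df1 + df2).

```python
# Recompute the reported effect sizes from their F ratios, assuming the
# values given in the text are (approximately) partial eta-squared:
#     eta^2 = (F * df1) / (F * df1 + df2)
def eta_squared(f, df1, df2):
    return (f * df1) / (f * df1 + df2)

reported = [
    ("suspicion manipulation check", 174.81, 1, 105, .62),
    ("Suspicion x Capacity interaction", 5.36, 1, 105, .05),
    ("suspicion main effect", 11.53, 1, 105, .09),
    ("communication type (accuracy)", 328.43, 1, 105, .76),
    ("capacity (confidence)", 13.23, 1, 105, .11),
    ("capacity (recall)", 91.85, 1, 105, .46),
]
for name, f, df1, df2, eta_text in reported:
    print(f"{name}: computed = {eta_squared(f, df1, df2):.3f}, reported = {eta_text}")
# The computed values closely track those reported in the text.
```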
Also, a significant interaction between suspicion and capacity was obtained, F(1, 105) = 8.22, p = .005, η² = .07. Participants who had high capacity recalled the message more accurately when they were high in suspicion than when they were low in suspicion, F(1, 105) = 6.10, p = .01, η² = .05. When participants had low capacity, this effect disappeared, F < 1 (see Table 2). Cognitive responses. The participants' cognitive reactions to the presentations were assessed by requiring them to write down their responses to the presentation and rate these responses as positive, negative, or irrelevant. For each participant, the number of positive, negative, and irrelevant responses was analyzed in three separate standard ANOVAs. When the number of positive responses was analyzed, only a main effect for capacity was significant, F(1, 100) = 32.71, p < .001, η² = .24.3 High capacity was associated with more positive responses (M = 3.62, SD = 2.34) than low capacity (M = 2.22, SD = 2.03). When negative responses were analyzed, a main effect for capacity was again obtained, F(1, 100) = 10.39, p = .002, η² = .09. High capacity was associated with fewer negative responses (M = 2.25, SD = 1.94) than low capacity (M = 3.12, SD = 2.17). In addition, this analysis also produced a main effect for suspicion, F(1, 100) = 4.75, p = .03, η² = .04, with high suspicion producing more negative responses (M = 2.99, SD = 1.59) than low suspicion (M = 2.35, SD = 1.49). Finally, when responses that were irrelevant to the message were analyzed, no significant effects were obtained. Discussion [Table 1 and Table 2 appear here] The results provided partial support for the hypothesis. When detectors were in the low suspicion conditions, they made more judgments of truth with low capacity than with high capacity; that is, there was more truth bias when detectors had reduced cognitive capacity. However, when detectors were in the high suspicion conditions, there was no significant difference between high and low capacity conditions. The pattern of results from the recall data provides support for our hypothesis that suspicion motivates people who have sufficient cognitive capacity to engage in mindful processing of the presentation. That is, participants who were made suspicious and who had high cognitive capacity recalled the message better than participants in the other conditions. Overall, these findings suggest that truth bias, at least in part, functions as a heuristic designed to decrease the amount of cognitive effort needed to make judgments about others' veracity. Although the results from the veracity and recall measures were consistent with the hypothesis, the failure to find a significant effect for cognitive capacity in the high suspicion conditions is puzzling. If the suspicion manipulation predisposed people to adopt a lie bias (Levine & McCornack, 1991), then it might be expected that reductions in cognitive capacity would cause more reliance on a lie bias. That is, lie bias might operate like truth bias to reduce cognitive demands. Perhaps, in the present study, the amount of suspicion created was not high enough to induce a lie bias. Recall that participants in the high suspicion conditions on average did not rate themselves as being extremely suspicious (M = 6.07).
It may be that higher levels of suspicion would have produced a stronger tendency toward the lie bias, and people would have been more inclined to adopt the lie bias to deal with reduced cognitive capacity. Future research should examine the effects of higher suspicion levels. When the accuracy of the veracity judgments was examined, an expected main effect for the veracity of the communication was found. Overall, as we would expect if people have a truth bias, participants were more accurate in detecting truthful messages than deceptive messages. Consistent with past research, increases in suspicion were not associated with an overall increase in detection accuracy (e.g., Buller, Strzyzewski, & Comstock, 1991; McCornack & Levine, 1990; Stiff et al., 1992; Toris & DePaulo, 1985). Instead, suspicious participants were more accurate with deceptive communications, and nonsuspicious participants were more accurate with truthful communications. The examination of the detectors' confidence in their judgments indicated that, overall, the participants' confidence in their veracity judgments was not related to the accuracy of their judgments. This is consistent with some findings in the literature (e.g., Vrij, 1994). However, when this relationship was examined separately in each condition, a negative relation between accuracy and confidence emerged in the low suspicion and high capacity condition. As participants became more confident in their veracity judgments, their judgments became less accurate. The tendency to overestimate detection ability has also been found in a number of studies (e.g., Brandt, Hocking, & Miller, 1980; deTurck, Harszlak, Bodhorn, & Texter, 1990). Finally, the analyses of the detectors' cognitive responses produced a predictable pattern of results. Detectors in the low capacity conditions produced fewer positive responses and more negative responses than participants in the high capacity conditions. Performing arithmetic problems while attempting to listen to a presentation is a less pleasant and satisfying experience than just listening to the presentation. Also, detectors who were more suspicious found the experience more negative than detectors who were less suspicious. Despite providing some support for the hypothesis, the present study has several limitations. First, the effect of the problem-solving manipulation on the detectors' cognitive capacity was not directly measured. Cognitive capacity was not directly measured because it is difficult to construct a measure that would not itself influence cognitive capacity or interact with the experimental procedures. Although cognitive capacity was not directly assessed, the results present a pattern of evidence consistent with the notion that the problem-solving procedure manipulated the amount of cognitive capacity. Not only did the detectors work on arithmetic problems in the low capacity conditions, but they also were less confident in their judgments and were less able to recall the presentation. Together, these measures provide a reasonably good indication that cognitive capacity was manipulated as intended. Second, the lie detection task used in the study was noninteractive; that is, detectors and presenters were unable to converse with one another.
In many real-life communication situations, there is an interaction between the sender and detector, where each person can influence the behavior of the other (see Buller & Burgoon's, in press, interpersonal deception theory for a discussion of the importance of interactive paradigms). A noninteractive procedure was used in the present research because in an interactive paradigm it would have been difficult to control the level of suspicion and cognitive capacity; that is, the type and number of interactions between detector and communicator would have influenced both suspicion and cognitive load. Finally, the development of the truth bias heuristic is not addressed by the present research. Perhaps the evolution of the truth bias heuristic is related to other general heuristics such as availability. It may be that originally, people encode most communications as truthful because of difficulties involved in processing false information (Gilbert, 1993; Gilbert, Krull, & Malone, 1990) and the cognitive and practical difficulties involved in confirming deceptions (Wegner, Coulton, & Wenzlaff, 1985). That is, in most social situations, people do not receive feedback regarding the truthfulness of a communication, making it difficult to confirm a deception. If these many instances of truth judgments are easy to bring to mind, then this may lead people to assume that the likelihood of truthful communication is very high (cf. Stiff et al., 1992). This is a question for future research. Notes 1. It is possible that people simply assume communications are truthful without any processing (heuristic or systematic) of the communication or situation. However, we do not believe that the truth bias effects found in the literature could occur without any processing taking place. If people simply applied the rule that people are truthful without any processing of the communication or situation, then all communication should be judged as truthful. The literature clearly indicates that people are influenced by elements of the communication and situation (e.g., McCornack & Parks, 1986; Millar & Millar, 1995). 2. Several further analyses were conducted on all of the dependent measures to examine the impact of a number of control variables. The order of truthful and deceptive communications, and the order of the high and low capacity manipulation, were added separately as factors to each of the analyses. The order variables were not involved in any significant effects. Also, to examine whether the participants' responses changed from the beginning to the end of the session, trials was added as a factor to each analysis. The trials variable was not involved in any significant effects. Finally, a set of analyses was conducted to examine whether the positive skew associated with using count data influenced the results. All the analyses on count data reported in this article were performed with the data subjected to a square root transformation (see Kirk, 1986, and Winer, 1971, for discussions of transformations). These analyses produced the same pattern of significant effects. The means and analyses reported in the text have been presented in their original form. 3. Five of the participants failed to complete the cognitive responses measure. References Brandt, D. R., Hocking, J., & Miller, G. (1980). The truth-deception attribution: Effects of familiarity on the ability of observers to detect deception. Human Communication Research, 6, 99-110. Brandt, D. R., Miller, G., & Hocking, J. (1982).
Familiarity and lie detection: A replication and extension. Western Journal of Speech Communication, 46, 276-290. Buller, D. B., & Burgoon, J. K. (in press). Interpersonal deception theory. Communication Theory. Buller, D. B., Strzyzewski, K. D., & Comstock, J. (1991). Interpersonal deception: I. Deceivers' reactions to receivers' suspicions and probing. Communication Monographs, 58, 1-24. Buller, D. B., Strzyzewski, K. D., & Hunsaker, F. (1991). Interpersonal deception: II. The inferiority of conversational participants as deception detectors. Communication Monographs, 58, 25-40. Burgoon, J., Buller, D., Ebesu, A., & Rockwell, P. (1994). Interpersonal deception: V. Accuracy in deception detection. Communication Monographs, 61, 303-325. Chaiken, S., Liberman, A., & Eagly, A. (1989). Heuristic and systematic processing within and beyond the persuasion context. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 212-252). New York: Guilford. Clark, H., & Clark, E. (1977). Psychology and language. New York: Harcourt Brace Jovanovich. DePaulo, B. M., Kirkendol, S., Tang, J., & O'Brien, T. (1988). The motivational impairment effect in the communication of deception: Replications and extensions. Journal of Nonverbal Behavior, 12, 177-202. DePaulo, B. M., Stone, J. I., & Lassiter, G. D. (1985). Deceiving and detecting deceit. In B. R. Schlenker (Ed.), The self and social life (pp. 323-370). New York: McGraw-Hill. DePaulo, P. J. (1988). Research on deception in marketing communications: Its relevance to the study of nonverbal behavior. Journal of Nonverbal Behavior, 12, 253-273. DePaulo, P. J., & DePaulo, B. M. (1989). Can deception by salespersons and customers be detected through nonverbal behavioral cues? Journal of Applied Social Psychology, 19, 1552-1577. deTurck, M. A., Harszlak, J., Bodhorn, D., & Texter, L. (1990). Deception and arousal: Isolating the behavioral correlates of deception. Human Communication Research, 12, 181-201. Ekman, P. (1985). Telling lies: Cues to deceit in the marketplace, marriage, and politics. New York: W. W. Norton. Ekman, P., & O'Sullivan, M. (1991). Who can catch a liar? American Psychologist, 46, 913-920. Festinger, L., & Maccoby, N. (1964). On resistance to persuasive communications. Journal of Abnormal and Social Psychology, 68, 359-366. Fiedler, K., & Walka, I. (1993). Training lie detectors to use nonverbal cues instead of global heuristics. Human Communication Research, 20, 199-223. Gilbert, D. (1989). Thinking lightly about others: Automatic components of the social inference process. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 189-210). New York: Guilford. Gilbert, D. (1993). The assent of man: Mental representation and control of belief. In D. Wegner & J. Pennebaker (Eds.), Handbook of mental control (pp. 57-87). Englewood Cliffs, NJ: Prentice Hall. Gilbert, D., Krull, D. S., & Malone, P. (1990). Unbelieving the unbelievable: Some problems in the rejection of false information. Journal of Personality and Social Psychology, 59, 601-613. Gilbert, D., McNulty, S. E., Giuliano, T., & Benson, J. (1992). Blurry words and fuzzy deeds: The attribution of outcome behavior. Journal of Personality and Social Psychology, 62, 18-25. Greenwald, A. G. (1968). Cognitive learning, cognitive response to persuasion, and attitude change. In A. G. Greenwald, T. C. Brock, & T. M. Ostrom (Eds.), Psychological foundations of attitudes (pp. 148-170). New York: Academic Press. Grice, H. P. (1975). Logic and conversation. In P. Cole & J.
Morgan (Eds.), Syntax and semantics (Vol. 3, pp. 41-58). New York: Seminar Press. Hilton, J., Fein, S., & Miller, D. (1993). Suspicion and dispositional inference. Personality and Social Psychology Bulletin, 19, 501-512. Kalbfleisch, P. (1992). Deceit, distrust, and the social milieu: Application of deception research in a troubled world. Journal of Applied Communication Research. The behavioral correlates of sanctioned and unsanctioned deceptive communication Journal of Nonverbal Behavior New York Fall 1998 -------------------------------------------------------------------------------- Authors: Thomas H Feeley Authors: Mark A deTurck Volume: 22 Issue: 3 Pagination: 189-204 ISSN: 01915886 Subject Terms: Behavior Lying Cheating Honesty Abstract: Ninety-three students were randomly assigned to one of three veracity conditions: (1) truthful, (2) unsanctioned-deceptive, or (3) sanctioned-deceptive. Participants in the truthful condition were honest when reporting strategies used when attempting to unscramble a series of anagrams. Copyright Human Sciences Press, Inc. Fall 1998 Full Text: ABSTRACT: Ninety-three students were randomly assigned to one of three veracity conditions: (1) truthful, (2) unsanctioned-deceptive, or (3) sanctioned-deceptive. Participants in the truthful condition were honest when reporting strategies used when attempting to unscramble a series of anagrams. Students in the sanctioned-deceptive and unsanctioned-deceptive conditions were induced to cheat (by looking at the answers) on the anagram task by a research confederate. Students in the sanctioned condition were asked by an experimenter to conceal their cheating by lying to the interviewer about their "high score" on the anagram task, whereas students in the unsanctioned condition were not given any instructions about how to answer the interviewer's questions regarding their anagram-solving strategies. All interviews were videotaped, and verbal and nonverbal behaviors were analyzed by four student-coders. Results indicated that unsanctioned deceivers, when compared to sanctioned deceivers, made fewer speech errors and speech hesitations, gazed less at their targets, and used fewer other-references. Introduction People tell lies for a variety of reasons. Some lie to protect themselves from punishment or embarrassment (e.g., Stiff, Corman, Krizek, & Snider, 1994); some lie to gain a monetary or social reward under false pretense (e.g., Feeley & deTurck, 1995; Stiff & Miller, 1986); some lie to project a false image of themselves; others lie to protect another's feelings (e.g., Bell & DePaulo, 1996; DePaulo & Bell, 1996); and still others lie just for the sake of lying, finding it exciting and challenging. Ekman (1991) calls this duping delight. Scholars of deception have long examined the behaviors associated with deceptive communication (for meta-analytic reviews see DePaulo, Stone, & Lassiter, 1985; Kraut, 1980; Zuckerman, DePaulo, & Rosenthal, 1981; Zuckerman & Driver, 1985). Over fifty studies have yielded some promising results. When compared to truth tellers, deceivers tend to commit more speech errors and speech hesitations, speak for a shorter length of time, use more adaptors, and convey less immediacy.
Authors of experiments analyzing the behavioral correlates of deception have used many different scenarios to study deceptive communication. For example, an early study by DePaulo and Rosenthal (1979) is not atypical of many of the paradigms employed to solicit deception. Senders in the DePaulo and Rosenthal study were instructed to honestly and dishonestly describe people they like and people they dislike. A later experiment by deTurck and Miller (1985) used an entirely different paradigm. Participants in the deTurck and Miller study were induced by a research confederate to cheat (or not to cheat in the truthful condition) on a dot-estimation task. After the participants estimated the number of dots on ten cards, they were asked by an experimenter to describe the strategies they used while estimating the number of dots on each card. Participants who were implicated were forced to either 'fess up or lie about their remarkable performance on the dot-estimation task. These two paradigms are popular experimental designs used to examine deceptive behavior. It is interesting to note that participants in each of the two paradigms may experience completely different motivations when they tell lies. The question then becomes: do the dynamics of deception differ across differing motivational conditions? Will deceivers behave differently when they are motivated to lie of their own volition than when they are motivated to lie to satisfy the requirements of the experiment(er)? The present research will address these questions. Miller and Stiff (1993) considered lies of the former type to be unsanctioned lies, whereby senders decided on their own whether to lie or tell the truth. Lies sanctioned by the experimenter for purposes of research are labeled sanctioned lies. The following discussion will distinguish motivational features of each type of deception. Unsanctioned Deception In deciding whether to deceive or to tell the truth, communicators must balance the consequences of detection against the perceived benefits of deception (Feeley, 1997a; Stiff et al., 1994; Stiff & Miller, 1986). When deception is sanctioned by the experimenter, the sender does not make the decision to lie or tell the truth; the decision is made for the sender. Unsanctioned lies seek to capture the necessary elements that characterize heightened arousal and increased cognitive effort: fear of punishment, accountability, and motivation to deceive effectively. For example, a study by Stiff et al. (1994) offered individuals an opportunity to cheat on a class quiz, and many students chose the option to cheat. After the quiz, students were interviewed and asked specific questions about their honesty on the quiz. Participants who chose to lie about the cheating did so of their own accord, were motivated to conceal their illicit behavior, and were solely responsible for their communicative behavior (Stiff et al., 1994). Similar paradigms have been used to increase accountability and deceiver motivation (deTurck & Miller, 1985; Dulaney, 1982; Exline, Thibaut, Hickey, & Gumpert, 1971; Feeley, deTurck, & Young, 1995; Stiff & Miller, 1986); however, most studies in deception have employed the sanctioned lie manipulation. Sanctioned Deception When one thinks of lying, the types of deception that spring to mind are usually stories of sordid defendants on trial for murder or a shifty husband facing accusations of infidelity.
While these types of deception are certainly exciting, arousing, and interesting, they seem to be the exception rather than the norm in everyday social interaction. Most lies require little planning, are relatively uninvolving, and have minimal consequences if uncovered (DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996; Kashy & DePaulo, 1996). Recent research by DePaulo et al. (1996) found participants to have mild reactions to recent lies they had told. Using a diary method, participants reported that their lies were not very serious, that they did not plan them much, and that they did not worry much about getting caught. If most lies require little planning, require minimal effort, and provoke little fear in the sender of being caught, then perhaps laboratory studies using sanctioned lies create more experimental realism than some scholars would contend (e.g., Feeley et al., 1995). Proponents of the sanctioned lie rest their position on the ecological validity of the lies used during research experiments. That is, the lies told in the laboratory mirror most lies told in everyday communication: exaggerated expressions of liking and false reports of attitudes and likes about an issue or another person (cf. Burgoon & Buller, 1994; DePaulo & Rosenthal, 1979; Levine & McCornack, 1992). The critical difference between sanctioned and unsanctioned deception manipulations rests in each paradigm's ability to motivate individuals to lie successfully. Lies told in each of the two paradigms, it is argued here, involve different motivations to lie. For example, participants in most unsanctioned paradigms are motivated to lie to gain a reward (e.g., to win a contest or to acquire a higher quiz grade) or to escape punishment (e.g., for being caught cheating by an experimenter), whereas participants in most sanctioned lie paradigms are motivated to escape detection for entirely different reasons. For example, participants in a study by DePaulo, Lanier, and Davis (1983) were told that the ability to lie successfully was extremely important and often associated with professional success. Several studies (e.g., Buller & Aune, 1987; Burgoon & Buller, 1994) failed to offer senders any motivation to deceive other than for research purposes. Thus, it appears the motivation to deceive would be higher (or at least different) for unsanctioned liars than for their sanctioned counterparts. This contention, however, has yet to be supported by any empirical evidence. A goal of the present study is to examine the motivational differences between the two deception conditions. More specifically, the present research examines the behavioral correlates of sanctioned and unsanctioned lies. Unsanctioned liars may exhibit a different behavioral profile than sanctioned liars. It should be noted that Zuckerman and Driver's (1985) meta-analytic review examined behavioral differences between highly motivated and less highly motivated deceivers. They considered senders highly motivated if the senders were promised a monetary reward for doing well on the deception task (e.g., deTurck & Miller, 1985) or were told that the ability to lie successfully was associated with some type of skill (e.g., DePaulo et al., 1983); all other studies were considered to involve low motivation to deceive. Their results found that when compared to less highly motivated senders, highly motivated senders exhibit less eye gaze, fewer blinks, fewer head movements, fewer postural shifts, a shorter response length, a slower speech rate, greater pitch, and more negative statements.
However, the interpretation of these results should be only speculative. These results were collapsed across several different motivational paradigms, and almost all of the manipulations employed sanctioned lies. The Current Study The current experiment is an attempt to examine the behavioral differences between truth tellers, unsanctioned deceivers, and sanctioned deceivers. Earlier it was suggested that the different motivations for communicating may significantly influence the behavioral profile of the communicator. Sanctioned liars would most likely be motivated to control their self-presentation in an effort to dupe the interviewer or to appear cooperative with the experimental requirements. Unsanctioned liars, who are not given permission to lie, may be motivated to lie to gain a reward (two compact disks) or to escape punishment for cheating on an experimental task. It would be interesting to examine whether the two differing motivations elicit different behavioral profiles. If they do, students of deception may need to re-examine the behavioral profiles of deceptive communicators with careful consideration given to deceiver motivation. Initial efforts toward this consideration were advanced earlier by Zuckerman, DePaulo, and Rosenthal (1981) in their meta-analysis. Method Participants and Procedure Two hundred twelve students from an introductory communication class at a large eastern university participated in partial fulfillment of a course requirement. Participants signed up for a study advertised as "a study in examining conversational management techniques in an interviewing situation." Students were required to sign up in one-hour time blocks in groups of four. It was required that students sign up with strangers.1 Along with a research confederate, the group of four students was given a prebriefing statement that explained the purpose of the study. Participants were told that the purpose of the study was to "examine interviewing behavior during abstract problem-solving." The abstract problem was solving anagrams, or scrambled words. Participants were then told they would be divided into a group of three interviewees (EEs) and a group of two interviewers (ERs). The goal of the EE was to unscramble as many of the anagrams as possible. The goal of the ER was to interview and evaluate the EE's conversational management skills during the interview. After reading the prebriefing statement, participants were asked to sign a consent form. After the groups were randomly divided, each group was escorted to an adjacent research room for further instructions.2 Interviewee Manipulation Interviewees were seated and given a prebriefing form that described their role in the experiment. EEs were reminded that each person in the group that solved the most anagrams would be given a reward of two compact disks of their choice from a local music store. EEs were then required to complete a two-minute trial task consisting of twenty anagrams. The experimenter left the room during the two-minute trial task. After the time elapsed, the EEs were given feedback regarding their performance on the trial task. This was done by the experimenter, who consulted a folder located on a desk next to the table where the EEs were seated. After questions were addressed, the five-minute, 35-anagram experimental task began. The experimenter then left the room while the EEs began the anagram task. At this point in the experiment the veracity manipulation was introduced.
The veracity manipulation divided EEs into three conditions: (1) truthful, (2) unsanctioned lie, and (3) sanctioned lie. Confederates in the truthful condition were instructed to assist on only one or two of the answers and no more. In the unsanctioned and sanctioned lie conditions, the confederate pretended to grow tired after one minute of trying to unscramble the words. The confederate then proceeded to open the experimenter's folder, where the answers to the practice task were located. After opening the folder, the confederate proceeded to share some of the answers (10-12) with the group. In instances where the group was reluctant to use the answers, the confederate was instructed to write down the answers him/herself. After the five minutes elapsed, the experimenter re-entered the EE research room and explained that EEs would now be interviewed individually and asked questions about the strategies they used to achieve their anagram score. Students in the unsanctioned condition were immediately escorted to the interview room. Students in the sanctioned condition were not immediately escorted to the interview room. Instead, these students were debriefed about the cheating manipulation by the experimenter. Specifically, EEs in the sanctioned lie condition were told that "a goal of this study is to examine some of the strategies used by individuals when telling lies." Participants were then asked to lie to the ER about why they performed so well on the abstract task. Any EEs who felt uncomfortable about lying to the ER were excused from the study and offered full research credit for their participation (as promised in the prebriefing).3 To sum up, EEs in the truthful condition were not exposed to any cheating and subsequently were telling the truth when asked about their strategies for unscrambling the anagrams. EEs in the unsanctioned lie condition were exposed to cheating and had to choose whether to lie or tell the truth regarding their good performance on the anagram task. EEs in the sanctioned lie condition, however, were 'caught' by the experimenter and asked to lie for the purposes of the experiment. Interviewer Instructions While the EEs completed the anagram task, ERs were given a prebriefing form that described their role in the experiment. ERs were instructed to become familiar with the eight interview questions while the EEs completed the anagram task. ERs were also told they would be asked to assess the EE on several dimensions after the interview was completed. ERs were told that in many interview situations individuals are often dishonest in reporting their abilities on certain problem-solving tasks. Furthermore, ERs were told that fifty percent of the EEs in the experiment were given some of the answers to the anagram task and thus would be lying or embellishing while describing the strategies and methods they reported using to solve the anagrams.4 Interviews After the ERs and EEs completed their separate tasks, they were reunited in the main research room for interviewing. Interviews were conducted in pairs, and the two EEs and one ER who were not interviewing were asked to wait in a room outside of the interviewing room. Interviews lasted, on average, four to five minutes and were videotaped with the participants' permission. ERs asked EEs four baseline questions (What is your name? What is your major and what do you like best about your major? Where is your hometown? What do you like best about the University?)
and four questions about the strategies used to unscramble the anagrams (What strategies did you use to unscramble the anagrams? How do you explain your score on the anagram task? Did anything else happen in the research room to help your ability to perform well on the abstract task? How did the other group members help on the problem-solving task?). After each interview, ERs and EEs completed a questionnaire that asked them to assess their own behavior and the other's behavior on several dimensions.5 It is important to note that there were eight instances (four female and four male EEs) in which EEs in the unsanctioned lie condition did not choose to lie about why they did so well on the task. These sixteen students (8 ERs and 8 EEs) were excused from the study. This dropout rate is nontrivial when one considers that no students in the sanctioned lie and truthful conditions elected to discontinue the study, and it may affect the interpretation of the results (i.e., it works against random assignment and strong inference). It could be suggested that the students in the unsanctioned lie condition represent only those students who elected to lie. This differential dropout rate may explain why the majority of research paradigms in deception use the sanctioned paradigm. After ERs and EEs completed their separate self-report measures, students were debriefed thoroughly regarding the purpose of the experiment. Participants were given a debriefing statement that explained the purpose and goals of the study and that cited previous research in deception, should participants be interested in reading further on the subject. After reading the debriefing and having their concerns addressed, participants completed a consent form that requested permission to use the self-report data and videotaped interview for research and educational purposes. Coding the Videotapes Four students served as coders and were paid for their assistance. Coders worked independently and were asked to code the participants along fourteen behaviors. Each coder was given one-half of the participants to code. Thus, each individual on the videotapes was coded by two different student-coders. Three verbal behaviors were analyzed (number of self-references, number of other-references, and number of total words) along with five vocal behaviors (number of speech errors, number of speech hesitations, number of pauses, speech length, and response latency) and six visual behaviors (length of adaptors, eye gaze length, number of foot gestures, number of hand gestures, number of postural shifts, and amount of smiling). Coders were given a training session that defined and demonstrated the fourteen cues, using the definitions offered by Zuckerman and Driver (1985). The definition for each cue is listed below. Inter-coder reliabilities using Pearson correlations for each cue were: adaptors .61; message duration .87; eye gaze .73; foot gestures .73; hand gestures .93; pauses .97; postural shifts .80; response latency .66; speech errors .86; speech hesitations .88; self-references .88; number of words .93; other-references .80. Speech errors, speech hesitations, self-references, number of words, and other-references were coded from written transcripts of the videotapes. All counts and amounts of behaviors were divided by interview length to control for amount of time speaking. Speech rates were computed by dividing the number of total words by the speech length.
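As a concrete illustration of the reliability and rate computations just described (the cue definitions follow below), the following sketch computes a Pearson inter-coder correlation for one cue and converts raw counts into per-minute rates; all data and column names are hypothetical.

```python
# Hypothetical illustration of the coding computations described above:
# (1) Pearson correlation between two coders' counts for one cue, and
# (2) normalizing counts by interview length, as reported in the text.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "coder_a_speech_errors": [4, 7, 2, 9, 5, 3],   # raw counts per interview
    "coder_b_speech_errors": [5, 6, 2, 10, 4, 3],
    "interview_minutes":     [4.5, 5.0, 4.0, 5.5, 4.8, 4.2],
    "speech_seconds":        [150, 180, 120, 200, 160, 140],
    "total_words":           [420, 510, 300, 580, 450, 380],
})

r, _ = pearsonr(df["coder_a_speech_errors"], df["coder_b_speech_errors"])
print(f"inter-coder reliability (speech errors): r = {r:.2f}")

# Counts divided by interview length to control for time speaking.
df["speech_errors_per_minute"] = (
    df[["coder_a_speech_errors", "coder_b_speech_errors"]].mean(axis=1)
    / df["interview_minutes"]
)

# Speech rate: total words divided by speaking time (words per second),
# per cue definition 7 below.
df["speech_rate"] = df["total_words"] / df["speech_seconds"]
print(df[["speech_errors_per_minute", "speech_rate"]].round(2))
```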
Verbal Behaviors
1. Self-references: measured by frequency of references to self.
2. Other-references: measured by frequency of references to others (e.g., he, her, she, him).
3. Number of words: measured by total number of words.
Vocal Behaviors
4. Speech length: total amount of time spent speaking.
5. Speech hesitations: measured by frequency of filled pauses (e.g., ers, uhms, ahs).
6. Response latency: measured by amount of time between the end of a question and the beginning of an answer.
7. Speech rate: the total number of words divided by the duration of the message.
8. Pauses: the total number of silent breaks in speech.
9. Speech errors: measured by frequency of nonfluencies, grammatical errors, word and/or sentence repetition, sentence change, sentence incompletion, or slips of the tongue.
Visual Behaviors
10. Adaptors: amount of time spent using self-manipulations (e.g., rubbing hand on leg).
11. Eye gaze: measured by duration of time spent looking directly at the interviewer.
12. Foot gestures: measured by total amount of time spent moving feet.
13. Hand gestures: measured by total amount of time spent moving hands.
14. Postural shifts: total number of postural shifts.
Results All analyses used 95 EEs, and the gender breakdown was equivalent across experimental conditions, χ² = 8.42, df = 12, p = .80. Ninety-two males and ninety-eight females participated. In some instances (e.g., unclear audio, a camera shot that cut off the hands), a subject's behavior was not codable, which reduced the degrees of freedom in the statistical tests (listwise deletion was used). Manipulation Check To test the deception manipulation, two one-way ANOVAs were performed with veracity serving as the between-subjects factor and reported honesty and number of correct answers serving as the criterion variables. Results indicated a significant difference with respect to EE reported honesty, F(2, 92) = 119.32, p < .001, η² = .72. Scheffe post-hoc analysis of the means found significant differences between all three veracity conditions. Participants in the truthful condition (M = 7.57) reported more honesty (on an 8-point scale) than participants in the unsanctioned lie condition (M = 5.81), who in turn reported more honesty than sanctioned liars (M = 2.94). It is interesting to note that sanctioned liars and unsanctioned liars reported different levels of honesty on the anagram task. It is likely that since unsanctioned liars voluntarily lied during the interview, they decided to maintain their deception throughout the questionnaire. A significant difference was also found between veracity conditions on the number of correct answers on the anagram task, F(2, 92) = 198.83, p < .001, η² = .81. Participants in the unsanctioned lie condition (M = 19.23) and sanctioned lie condition (M = 19.74) unscrambled significantly more anagrams than did students in the truthful condition (M = 10.49). Taken together, these results indicate that the deception manipulation was successful. Behavioral Differences by Veracity Condition A goal of the present study was to test for behavioral differences between sanctioned deceivers, unsanctioned deceivers, and truth tellers; a sketch of the basic analysis appears below.
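The analysis pattern used throughout these Results, a one-way ANOVA over the three veracity conditions followed by pairwise post-hoc comparisons, can be sketched as follows. The data are simulated around the reported honesty means, and Tukey's HSD stands in for the Scheffe and LSD procedures named in the text, since it is the pairwise post-hoc test readily available in statsmodels.

```python
# Sketch of the one-way veracity ANOVA with pairwise post-hoc tests.
# Data are simulated around the reported means; Tukey's HSD stands in
# for the Scheffe/LSD procedures used in the original analyses.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
honesty = {
    "truthful": rng.normal(7.57, 0.7, 31),      # reported M = 7.57
    "unsanctioned": rng.normal(5.81, 1.2, 32),  # reported M = 5.81
    "sanctioned": rng.normal(2.94, 1.3, 32),    # reported M = 2.94
}

# 31 + 32 + 32 = 95 EEs, matching the df = (2, 92) reported in the text.
f_stat, p_val = f_oneway(*honesty.values())
print(f"F(2, 92) = {f_stat:.2f}, p = {p_val:.4f}")

scores = np.concatenate(list(honesty.values()))
groups = np.repeat(list(honesty.keys()), [len(v) for v in honesty.values()])
print(pairwise_tukeyhsd(scores, groups))
```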
To test for differences between each deceptive paradigm, and to test for differences between each deception manipulation and a truthful sample of communicators, separate one-way ANOVA tests were conducted with the verbal and nonverbal behaviors serving as the criterion variables and veracity (unsanctioned/sanctioned/truthful) serving as the between-subjects independent factor. Table 1 lists the means and standard deviations by veracity condition along with F test values and effect sizes. Significant differences were found for speech errors, F(2, 89) = 19.32, p < .001, η² = .30, speech hesitations, F(2, 89) = 3.39, p < .05, η² = .07, and other-references, F(2, 89) = 3.05, p < .05, η² = .06. Results of LSD post-hoc tests revealed only one significant behavioral difference between truthful communicators and unsanctioned deceptive communicators: truthful senders committed more speech errors than unsanctioned deceivers. Regarding differences between truth tellers and sanctioned deceivers, truth tellers committed fewer speech errors and spoke for a greater length of time than sanctioned deceivers. To test for verbal and nonverbal differences between sanctioned and unsanctioned deceivers and truth tellers, contrast weights were assigned to compare means between each condition. Table 2 outlines the contrast differences for each behavior and the significance of each difference. For the first column, contrast weights of +1 and -1 were assigned to truthful participants and sanctioned lie participants, respectively. In the second column, contrast differences between truthful (+1 weight) and unsanctioned lie (-1 weight) participants were analyzed, and in the third column truthful behaviors (+2 weight) were compared to both sanctioned (-1 weight) and unsanctioned (-1 weight) behaviors. Discussion The results of the previous experiment indicate that the type of manipulation used to elicit deceptive communication in research makes a significant difference. This study compared sanctioned deception, that is, lying to satisfy the requirements of the experimenter, to unsanctioned deception, that is, lying without explicit permission from the experimenter. Unsanctioned deceivers used fewer adaptors, less eye gaze, fewer self-references and other-references, and committed fewer speech errors and speech hesitations than sanctioned deceivers. At the same time, unsanctioned deceivers shifted more while answering questions. These results raise the question: Why should unsanctioned and sanctioned liars behave differently? To address this question, we offer two possible explanations: the motivational impairment effect and communication appraisal. It was suggested earlier that an individual's motivation to deceive may differ as a function of sanctioning. That is, the motivation to escape detection may be greater for unsanctioned liars than for sanctioned liars. In this study, unsanctioned liars were ostensibly lying for one or two possible reasons: (1) to perform well on the anagram task to win the two compact disk prizes offered by the experimenter or (2) to avoid being caught cheating by the experimenter. Compare these motivations to the motivation of the sanctioned liar, who lied primarily to satisfy the requirements of the study. DePaulo and colleagues (e.g., DePaulo & Kirkendol, 1989; DePaulo et al., 1983) have found that increased motivation to lie has deleterious effects on deception success.
Stated differently, lies told by motivated liars are more readily detected than lies told by unmotivated liars when receivers are exposed to the sender's nonverbal behavior (i.e., audio, audiovisual, visual). DePaulo et al. (1983) dubbed this the motivational impairment effect. [Table 1 and Table 2 appear here] Were Unsanctioned Liars Motivationally Impaired? To explain the effects of motivation on deception detection success, DePaulo et al. (1985) advanced the behavioral control explanation. The behavioral control explanation states that highly motivated senders attempt to control their nonverbal presentation in an effort to appear truthful. Highly motivated senders, however, may be less successful at controlling their nonverbal behaviors (i.e., visual and vocal behaviors) and more successful at controlling their verbal behaviors. This would explain why motivated deceivers experience more deceptive success when receivers are exposed to only the verbal channel of communication (DePaulo, Kirkendol, Tang, & O'Brien, 1988). Unsanctioned liars in the present study may have attempted to control their entire nonverbal presentation but were only partially successful. That is, unsanctioned liars, when compared to sanctioned liars, were successful at controlling the verbal channel (fewer speech errors, fewer speech hesitations, fewer self- and other-references) but less successful at controlling their nonverbal presentation (more postural shifts, less eye gaze). Unfortunately, the study presented herein failed to account for deceiver motivation or deceiver arousal; thus, the contention that motivation to deceive is higher for unsanctioned liars than for sanctioned liars is speculative until further data prove otherwise. Are Sanctioned Liars More Motivated? Perhaps we were mistaken to conceptualize unsanctioned liars as more motivated than sanctioned liars. It may be that sanctioned liars were more motivated to lie than unsanctioned liars. Participants in the sanctioned lie condition may have viewed the opportunity to lie without consequence as a challenge and therefore found it arousing and exciting. Research by Tomaka, Blascovich, Kelsey, and Leitten (1993) has shown that individuals often appraise a situation in one of two ways: as threatening or as challenging. Threat appraisals are those in which the perception of danger exceeds the perception of abilities or resources to cope with the stressor. By contrast, challenge appraisals are those in which the perception of danger does not exceed the perception of resources or abilities to cope (Tomaka et al., 1993). The unsanctioned liar may have seen the situation as threatening, and the sanctioned liar may have seen the situation as challenging. Research has shown that these appraisals often determine stress levels, behavior, and success on a task (Tomaka, Blascovich, & Kelsey, 1992; Tomaka et al., 1993; Young & deTurck, 1996). Future Research The next logical step in building on the present findings would be to replicate the original DePaulo et al. (1983) experiment with one exception: senders' lies should be sanctioned or unsanctioned, and senders' motivation should be manipulated. Thus, sanctioning would be crossed with sender motivation. It would also be interesting to investigate how students in each deceptive paradigm appraise the communication situation using Tomaka et al.'s (1993) appraisal measure.
If, in fact, unsanctioned liars are more highly motivated than sanctioned liars, detection rates should be higher when receivers are exposed to the nonverbal channel (audio, audiovisual, visual) than when receivers are exposed only to the verbal channel (written).

Notes

1. Participants in the experiment were limited to strangers to avoid a possible confound with respect to relational familiarity. Research suggests that the dynamics of deception may differ as a function of relational familiarity (cf. Comadena, 1982; Levine & McCornack, 1992; McCornack & Parks, 1986).

2. This procedure is similar to a technique first introduced by Exline et al. (1970), whereby participants were implicated for cheating on a dot-estimation task (see also deTurck & Miller, 1985; Dulaney, 1982; Feeley & deTurck, 1995; Stiff & Miller, 1986).

3. No participants left the experiment; in fact, most participants seemed excited about the opportunity to lie.

4. This was done in an effort to raise suspicion on the part of detectors. As it was, ERs still evaluated EEs to be telling the truth 83% of the time. The results of lie detection accuracy rates by veracity condition can be found in Feeley (1997b).

5. Results from EEs' self-reports indicated no differences in involvement, nervousness, impression management, competence, and fluency across experimental conditions; for an extended discussion see Feeley (1997b).

References

Bell, K. L., & DePaulo, B. M. (1996). Liking and lying. Basic and Applied Social Psychology, 18, 243-266.
Buller, D. B., & Aune, R. K. (1987). Nonverbal cues to deception among intimates, friends, and strangers. Journal of Nonverbal Behavior, 11, 269-290.
Burgoon, J. K., & Buller, D. B. (1994). Interpersonal deception: III. Effects of deceit on perceived communication and nonverbal behavior dynamics. Journal of Nonverbal Behavior, 18, 155-184.
Cohen, J. (1969). Statistical power analysis for the behavioral sciences. New York: Academic Press.
Comadena, M. E. (1982). Accuracy in detecting deception: Intimate and friendship relationships. In M. Burgoon (Ed.), Communication Yearbook 6 (pp. 446-472). Beverly Hills, CA: Sage.
DePaulo, B. M., & Bell, K. L. (1996). Truth and investment: Lies are told to those who care. Journal of Personality and Social Psychology, 71.
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70, 979-995.
DePaulo, B. M., & Kirkendol, S. E. (1989). The motivational impairment effect in the communication of deception. In J. Yuille (Ed.), Credibility assessment (pp. 51-70). Norwell, MA: Kluwer.
DePaulo, B. M., Kirkendol, S. E., Tang, J., & O'Brien, T. P. (1988). The motivational impairment effect in the communication of deception: Replications and extensions. Journal of Nonverbal Behavior, 12, 177-202.
DePaulo, B. M., Lanier, K., & Davis, T. (1983). Detecting the deceit of the motivated liar. Journal of Personality and Social Psychology, 45, 1096-1103.
DePaulo, B. M., & Rosenthal, R. (1979). Telling lies. Journal of Personality and Social Psychology, 37, 1713-1722.
DePaulo, B. M., Stone, J. I., & Lassiter, G. D. (1985). Deceiving and detecting deceit. In B. R. Schlenker (Ed.), The self and social life (pp. 323-370). New York: McGraw-Hill.
deTurck, M. A., & Miller, G. R. (1985). Deception and arousal: Isolating the behavioral correlates of deception. Human Communication Research, 12, 181-201.
Dulaney, E. F. (1982). Changes in language behavior as a function of veracity.
Human Communication Research, 9, 75-82.
Ekman, P. (1991). Telling lies. New York: W. W. Norton.
Exline, R. E., Thibaut, J., Hickey, C. B., & Gumpert, P. (1970). Visual interaction in relation to Machiavellianism and an unethical act. In R. Christie & F. L. Geis (Eds.), Studies in Machiavellianism (pp. 53-75). New York: Academic Press.
Feeley, T. H. (1997a, November). Choosing deceptive communication. Paper presented to the National Communication Association, Chicago, IL.
Feeley, T. H. (1997b). Exploring sanctioned and unsanctioned lies in interpersonal deception. Communication Research Reports, 13, 164-173.
Feeley, T. H., & deTurck, M. A. (1995). Global cue usage in behavioral lie detection. Communication Quarterly, 43, 420-430.
Feeley, T. H., deTurck, M. A., & Young, M. J. (1995). Behavioral familiarity in lie detection. Communication Research Reports, 12, 160-169.
Fiedler, K., & Walka, I. (1993). Training lie detectors to use nonverbal cues instead of global heuristics. Human Communication Research, 20, 199-223.
Kraut, R. (1980). Humans as lie detectors: Some second thoughts. Journal of Communication, 30, 209-216.
Levine, T. R., & McCornack, S. A. (1992). Linking love and lies: A formal test of the McCornack and Parks model of deception detection. Journal of Social and Personal Relationships, 9, 143-154.
McCornack, S. A., & Parks, M. R. (1986). Deception detection and the other side of trust. In M. L. McLaughlin (Ed.), Communication Yearbook 9 (pp. 377-389). Beverly Hills, CA: Sage.
Miller, G. R., & Stiff, J. B. (1993). Deceptive communication. Newbury Park, CA: Sage.
Stiff, J. B., Corman, S. R., Krizek, R., & Snider, E. (1994). Individual differences and changes in nonverbal behavior: Unmasking the changing faces of deception. Communication Research, 21, 555-581.
Stiff, J. B., & Miller, G. R. (1986). "Come to think of it . . .": Interactive probes, deceptive communication, and deception detection. Human Communication Research, 12, 339-357.
Tomaka, J., Blascovich, J., & Kelsey, R. M. (1992). Effects of self-deception, social desirability, and repressive coping on psychophysiological reactivity to stress. Personality and Social Psychology Bulletin, 18, 616-624.
Tomaka, J., Blascovich, J., Kelsey, R. M., & Leitten, C. L. (1993). Subjective, physiological, and behavioral effects of threat and challenge appraisal. Journal of Personality and Social Psychology, 65, 1-13.
Young, M. J., & deTurck, M. A. (1996, November). The effects of stress and coping on deceptive communication. Paper presented to the Speech Communication Association Convention, San Diego, CA.
Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14, pp. 1-59). New York: Academic Press.
Zuckerman, M., & Driver, R. (1985). Telling lies: Verbal and nonverbal correlates of deception. In A. W. Siegman & S. Feldstein (Eds.), Nonverbal communication: An integrated perspective (pp. 129-147). Hillsdale, NJ: Lawrence Erlbaum.

Thomas H. Feeley, Department of Communication, State University of New York at Geneseo. Mark A. deTurck, Department of Communication, State University of New York at Buffalo. This paper is based on the first author's doctoral dissertation, directed by the second author. This research was supported by a grant from the Mark Diamond Research Foundation and a grant from the Geneseo Foundation, both to the first author.
The authors would like to thank Aldert Vrij, Bella DePaulo, Steve McCornack, and Ron Riggio for comments on an earlier version of this manuscript. Address correspondence to Thomas H. Feeley, Department of Communication, State University of New York at Geneseo, Geneseo, NY 14454; E-mail: feeley@uno.cc.geneseo.edu.

Individual differences in hand movements during deception Journal of Nonverbal Behavior New York Summer 1997
--------------------------------------------------------------------------------
Authors: Aldert Vrij
Authors: Lucy Akehurst
Authors: Paul Morris
Volume: 21
Issue: 2
Pagination: 87-102
ISSN: 01915886
Subject Terms: Nonverbal communication
Behavior
Personality
Cognition & reasoning
Hands

Abstract: The influence of two personality traits on making hand movements during deception was examined. It was hypothesized that individuals with high public self-consciousness and individuals skilled in controlling their behavior would make fewer hand movements during deception.

Copyright Human Sciences Press, Inc. Summer 1997

Full Text: ABSTRACT: This article addresses the influence of two personality traits on making hand movements during deception, namely public self-consciousness and ability to control behavior. It was hypothesized that especially individuals with high public self-consciousness and individuals who are skilled in controlling their behavior would make fewer hand movements during deception compared to truth-telling. A total of 56 participants were interviewed twice; in one interview they told the truth and in the other interview they lied. Before the interviews the participants completed a personality inventory to measure their levels of public self-consciousness and ability to control their behavior. The results supported the hypotheses. Some implications of these findings are discussed.

Recent studies concerning the relationship between hand movements and deception suggest that a decrease in such movements indicates deception (Davis & Hadiks, 1995; Ekman, 1988, 1989; Hofer, Kohnken, Hanewinkel, & Bruhn, 1992; Vrij, 1995; Vrij, Semin, & Bull, 1996). These findings are in direct contrast to beliefs among observers that liars tend to increase their movements during deception (Akehurst, Kohnken, Vrij, & Bull, 1996; DePaulo, Stone, & Lassiter, 1985; Vrij & Semin, 1996; Zuckerman, DePaulo, & Rosenthal, 1981). The control and cognitive load frameworks have been offered to explain differences in hand movements during deception and truth-telling. According to the control framework, liars try to control their body language to avoid giving off possible nonverbal cues to deception and to make a credible (reliable) impression (DePaulo, 1988, 1992; DePaulo & Kirkendol, 1989; Ekman, 1989; Kohnken, 1990). Liars, for instance, believe that movements will make them appear suspicious. Therefore, they will move very deliberately and tend to avoid those movements which are not strictly essential, thus resulting in an unusual degree of rigidity and inhibition. In agreement with this explanation, Vrij et al. (1996) found that decreases in subtle non-functional hand movements during deception were associated with the extent to which the participants had tried to control their behavior.
The cognitive load framework (Burgoon, Kelley, Newton, & Keely-Dyreson, 1989; Ekman & Friesen, 1972; Kohnken, 1989) emphasizes that deception is a cognitively complex task. It assumes that it is more cognitively difficult to fabricate a plausible and convincing lie, consistent with everything the observer knows or might find out, than to tell the truth. There is evidence to suggest that people engaged in cognitively complex tasks make fewer hand and arm movements; the cognitive load results in a neglect of body language, reducing overall animation (Ekman & Friesen, 1972).

Unfortunately, the majority of deception studies use a between-subjects design in order to investigate behavioral differences between deceivers and truth tellers; that is, some people tell the truth and some people lie, and the behavioral differences between the two groups are determined. The fact that research reveals that deception is associated with a decrease in hand movements does not imply that all people show a decrease in these movements during deception. It may well be the case that some people show an increase in hand movements during deception, or show no differences at all between truth-telling and lying. This raises the question: How many people show a decrease in hand movements during deception and how many people show an increase in hand movements during deception? A between-subjects research paradigm cannot answer this question, because in such a paradigm the subject is either lying or telling the truth. Only a within-subjects design, in which a participant takes part in both the honest and deceptive interviews, can offer an answer to this question. If different behavioral patterns occur, that is, if some people show a decrease in hand movements during deception whereas other people show an increase in hand movements during deception, this then raises the question: Which individual differences account for these different behavioral styles during deception? Both of the above questions will be addressed in the present experiment.

The first question (how many people show a decrease in hand movements during deception) has so far, to our knowledge, been addressed only by Ekman, O'Sullivan, Friesen, and Scherer (1991), who used a within-subjects design in their experiment. They found that 39% of participants showed fewer hand movements during deception than truth-telling, 26% showed more hand movements during deception than truth-telling, and 35% showed no substantial differences in hand movements between both interviews.1 In order to gain further insight into this issue we merged and reanalyzed the data files of three experiments using within-subjects designs (Vrij, 1995, 1996; Vrij et al., 1996), in which we examined differences in hand movements between truth tellers and liars (see the Method section for a description of the procedure used in these studies). A total of 218 participants (all college students) participated in the three experiments, and of these 52% showed a decrease in hand movements during deception, 30% an increase in hand movements during deception, and 18% showed no difference in hand movements.2 These data suggest that although more people decrease than increase their hand movements during deception, a substantial number of people exhibit an increase in hand movements during deception or display no behavioral differences at all between truth-telling and deception. These findings again make relevant the question: Which individual differences account for these different behavioral patterns?
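The tally behind these percentages is simple to make concrete. The following Python sketch classifies each participant by the direction of change in per-minute hand-movement rate between the honest and the deceptive interview; the rates below are invented purely for illustration and are not data from any of the studies cited above.

    from collections import Counter

    def classify(honest_rate, deceptive_rate):
        """Classify one participant by the direction of change during deception."""
        diff = deceptive_rate - honest_rate
        if diff < 0:
            return "decrease during deception"
        if diff > 0:
            return "increase during deception"
        return "no difference"

    # Hypothetical (honest, deceptive) per-minute rates for five participants.
    rates = [(7.5, 4.0), (3.0, 6.5), (0.0, 0.0), (11.0, 5.5), (4.2, 4.2)]

    counts = Counter(classify(h, d) for h, d in rates)
    for pattern, k in counts.items():
        print(f"{pattern}: {k} of {len(rates)} ({100 * k / len(rates):.0f}%)")

Note that Ekman et al. (1991) counted only "substantial" differences (see Note 1 below), whereas the merged reanalysis classified even the smallest nonzero difference as a change (Note 2); the classify function above follows the latter convention.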
Deception is a complex social activity. Several authors (Ekman, 1989; Knapp, Hart, & Dennis, 1974; Kohnken, 1989; Kraut & Poe, 1980; Riggio & Friedman, 1983) emphasize that deception evokes stress. Stress can be a result of guilt (feeling guilty over doing things that one is not allowed to do) or a result of fear of being caught. Therefore, deceivers will show a tendency to display nervous behavior (including making hand movements), and observers will expect deceivers to show such behavior and will therefore look for cues of nervous behavior. As a result, a liar, in order not to get caught, has to suppress and avoid showing signs of nervous behavior. Hence, two aspects are important, namely (1) being aware of how to make a credible impression, and (2) possessing the skills necessary to make a credible impression.

We may assume that people high in public self-consciousness will know how to make a credible impression. Public self-consciousness refers to the ability of persons to become aware of another's perspective and to act from that perspective (Fenigstein, Scheier, & Buss, 1975). Since observers believe that an increase in hand movements indicates deception (Akehurst et al., 1996; Vrij & Semin, 1996), we may expect that individuals high in public self-consciousness will realize this, and it is therefore hypothesized that individuals with high public self-consciousness will make fewer hand movements during deception (Hypothesis 1). This hypothesis is based upon the control approach. Also, on the basis of the cognitive load approach, it may be predicted that people high in public self-consciousness especially will make fewer movements during deception, because being concerned about making a good impression on somebody else requires a lot of cognitive attention (Patterson, 1995), which then necessitates a decrease in such attention to one's own movements.

People will differ in how successful they are in controlling their behavior. We therefore hypothesized that individuals who are skilled in controlling their behavior will be more successful in suppressing their hand movements during deception than people who are not so skilled in controlling their behavior (Hypothesis 2).3

Several studies have been conducted to date examining individual differences displayed during deception (Exline, Thibaut, Hickey, & Gumpert, 1970; Knapp, Hart, & Dennis, 1974; O'Hair, Cody, & McLaughlin, 1981; Riggio & Friedman, 1983; Siegman & Reynolds, 1983); most of these studies dealt with Machiavellianism, and none of them focused on hand movements. At least two studies examined a related issue, namely individual differences in deception ability (Riggio, Tucker, & Throckmorton, 1988; Vrij, 1993). In both studies, observers (undergraduate students in Riggio et al.'s study, police detectives in Vrij's study) watched videotapes of people who had been instructed either to tell the truth or to lie. After each tape, the observers indicated whether the person was lying. In both studies the people who appeared on the tape were administered a number of standardized social skill instruments. Vrij found that people high in public self-consciousness made a more credible impression on observers than people low in public self-consciousness. Riggio et al. found that people high in controlling their behavior were more successful deceivers than people low in controlling their behavior. Neither study provided information about the behavior shown by the people on the tape, and thus an explanation of the findings has to be speculative.
An explanation in line with the present hypotheses is that people high in public self-consciousness and people high in controlling their behavior refrained from making hand movements during deception and therefore made a credible impression on observers.

Method

Subjects

A total of 56 participants (college students) participated in the study (20% male). The average age was 24 years (SD = 7 years).

Procedure

The experiment was conducted at the University of Portsmouth in Portsmouth (United Kingdom). Participants were asked to take part in a study investigating their ability to deceive. Before the experiment started they were requested to fill out a questionnaire concerning background characteristics (gender and age) and personality traits (public self-consciousness and ability to control their behavior). The experimental setting was an interview similar to the ones used in previous studies (Vrij, 1995, 1996; Vrij et al., 1996). Participants were given the following instructions: "We are doing an experiment to investigate people's ability to deceive. In a minute, you will be interviewed twice about the possession of a small set of headphones. You will actually have the set of headphones in your possession during one interview, while during the other interview you will not have the set of headphones in your possession. Both times you have to deny the possession of the set of headphones." The order of lying versus telling the truth was counterbalanced. A total of 27 subjects received the set of headphones before the first interview with the request to hide them carefully, while the other 29 subjects received the set of headphones just before the second interview. The latter group had seen the set of headphones prior to the first interview.

After the instructions the experimenter took the subject to the interview room. The interviewer (a male lecturer in the Psychology Department of the University of Portsmouth who was unaware of the hypotheses tested in the experiment) asked the subject to take a seat and started the first interview. All interviews were standardized in that the following six questions were asked: (1) "Do you have the set of headphones in your possession?" (2) "Are you telling the truth?" (3) "Tell me exactly what you have in your possession." (4) "You forgot to mention the set of headphones, didn't you?" (5) "Are you telling me that you don't have the headphones in your possession?" (6) "Are you absolutely sure that you are telling me the truth?" After the first interview the subject left the interview room for a short period of time, either to return the set of headphones to the experimenter (if the subject was in possession of the set of headphones during the first interview) or to receive the set of headphones (if the subject was not in possession of the set of headphones during the first interview). Next the subject re-entered the interview room for the second interview. The second interview was identical to the first one. Both interviews were videotaped. On the videotapes the subjects' whole bodies were visible. The average length of the honest interviews was 26 seconds (SD = 4 seconds), and the average length of the deceptive interviews was 27 seconds (SD = 5 seconds).

Independent Variables

The independent variables were (A) the type of interview (lying vs. telling the truth), and (B) the order in which the honest and deceptive interviews were carried out, that is, lying-telling the truth or telling the truth-lying.
Personality Traits

Public self-consciousness was measured with the following six questions using true/false rating scales (Fenigstein et al., 1975): (1) I'm concerned about what other people think of me, (2) I usually worry about making a good impression, (3) I'm concerned about the way I present myself, (4) I'm self-conscious about the way I look, (5) I'm usually aware of my appearance, and (6) One of the last things I do before I leave my house is look in the mirror. The six items were clustered into one scale, "public self-consciousness" (Cronbach's alpha = .73).

The ability to control behavior was measured with 16 items using true/false rating scales, derived from Briggs, Cheek, and Buss' (1980) "other-directedness" (items 1 to 6) and "acting" (items 7 to 10) scales and Riggio's (1986) "emotional control" (items 11 to 13) and "social control" (items 14 to 16) scales: (1) In different situations and with different people, I often act like very different persons, (2) In order to get along and to be liked, I tend to be what people expect me to be rather than anything else, (3) I'm not always the person I appear to be, (4) Sometimes I put on a show to impress or entertain people, (5) Even if I am not enjoying myself, I often pretend to be having a good time, (6) I may deceive people by being friendly when I really dislike them, (7) I would probably make a good actor, (8) I have considered being an entertainer, (9) I can make impromptu speeches on topics about which I have almost no information, (10) I can look anyone in the eye and tell a lie with a straight face (as long as it is a white lie), (11) I am able to conceal my true feelings from just about anyone, (12) I am very good at maintaining a calm exterior, even when upset, (13) When I am really not enjoying myself at a social function, I can still make myself look as if I am having a good time, (14) I find it very easy to play different roles at different times, (15) When in a group of friends I am often spokesperson for the group, (16) I can fit in with all types of people, young and old, rich and poor. The sixteen items were clustered into one scale, "ability to control behavior" (Cronbach's alpha = .72).

Firstly, we calculated the correlation between both scales, which was not significant, r(56) = .05, ns. Secondly, we dichotomized both scales. Those who had a score lower than the mean score (M = 4.53) on the public self-consciousness (PSC) scale were allocated to the "low PSC" group (40% of the subjects), and the others were allocated to the "high PSC" group (60% of the subjects). Similarly, those who had a score lower than the mean score (M = 7.95) on the ability to control behavior (ACB) scale were allocated to the "low ACB" group (50% of the subjects), and the others were allocated to the "high ACB" group (50% of the subjects). In total, 12 subjects were allocated to the "low PSC-low ACB" cell, 16 subjects to the "high PSC-low ACB" cell, 11 subjects to the "low PSC-high ACB" cell, and 17 subjects to the "high PSC-high ACB" cell.

Dependent Variable

The dependent variable was the hand movements displayed by the subjects. These movements were scored in detail by two independent coders using the videotapes. Scoring was conducted by utilizing a coding scheme previously used in other studies (Vrij, 1995, 1996; Vrij et al., 1996).
The coders scored the frequency of hand/finger movements (also referred to as hand movements): a hand/finger movement is a movement of a hand or finger without the arm being moved; every single hand/finger movement was scored, with simultaneous movements of several fingers being scored as one movement; continuing movements (rubbing one's hands together and fidgeting) were scored every two seconds. A reliability measure was calculated for the two coders, r = .98, p < .001. These movements, which have been called "subtle hand and finger movements" (Vrij, 1995), differ somewhat from illustrators or adaptors (Ekman & Friesen, 1969). Illustrators are gestures which accompany and accent speech. They were not counted, because these movements include arm movements. For similar reasons, adaptors which include arm movements (scratching the head, face, wrists, and so on) were not counted either. We did count, however, adaptors which do not include arm movements. The reported behavioral scores were based on the average scores of the two coders. The frequency of hand movements reported below was calculated on a per-minute basis to correct for the length of the interview.4

We monitored whether the deception interview evoked stress by asking the participants, after both interviews, to indicate how nervous they felt during (a) the truth-telling interview, and (b) the deception interview. Answers were given on 7-point rating scales, ranging from (1) not at all nervous to (7) very nervous. A mixed-model analysis of variance (ANOVA) with condition (truth-telling vs. lying) and order (truth-telling-lying vs. lying-truth-telling) as the independent factors, and stress as the dependent variable, showed a significant effect for condition. The participants said they felt more stressed during the deception interview (M = 4.79) than during the truth-telling interview (M = 3.43), F(1, 54) = 61.83, p < .01.

Results

A mixed-model ANOVA with condition (truth-telling vs. lying) and order (truth-telling-lying vs. lying-truth-telling) as the independent factors and hand movements as the dependent variable indicated significant main effects for condition, F(1, 54) = 5.16, p < .04, and order, F(1, 54) = 4.67, p < .05, and a significant Condition × Order interaction effect, F(1, 54) = 4.89, p < .05. The mean scores regarding the condition factor revealed that, as expected, the subjects showed fewer hand movements during the deceptive interviews (M = 4.63) than during the honest interviews (M = 7.62). The mean scores concerning the order effect showed that the subjects exhibited the most hand movements when deception preceded truth-telling (M = 8.41 vs. M = 3.99). The mean scores and contrast effects regarding the interaction effect revealed that the subjects showed fewer hand movements during the deceptive interviews (M = 5.36) than during the honest interviews (M = 11.47) when deception preceded truth-telling, F(1, 26) = 6.29, p < .05, whereas the difference in hand movements during the deceptive interviews (M = 3.95) and during the honest interviews (M = 4.03) was not significant when truth-telling preceded deception, F(1, 26) = .00, ns. We found a similar interaction effect in previous studies (Vrij, 1995, 1996). The following explanation may be plausible. The experimental design implied that the students already knew during the second interview which questions would be asked. As a result of this knowledge, the students may have been better able to behave frankly, resulting in unforced hand and finger movement.
This reasoning, however, is not completely satisfactory, because it implies that the students would feel less tension during the second interview as well, which was the case in our previous experiments but not in this experiment.

We also calculated how many subjects showed a decrease or increase during deception by subtracting the number of hand movements made on a per-minute basis during the honest interview from the number of hand movements made on a per-minute basis during the deception interview. The results showed that 54% of participants made fewer movements during the deception interview than during the honest interview, 25% made more movements during the deception interview than during the honest interview, and 21% showed the same number of movements during both interviews. People who showed the same number of movements during both interviews did not make any hand movement at all during either interview. This categorization (a decrease in hand movements during deception, an increase in hand movements during deception, and no difference in hand movements between interviews) is used in further analyses.

In order to test Hypotheses 1 and 2, a log-linear analysis was conducted with hand movements (decrease, increase, no difference), public self-consciousness (high vs. low), and ability to control behavior (high vs. low) as variables.5 The model showed two significant effects, namely a Hand Movements × Public Self-Consciousness effect, χ²(2, N = 56) = 12.29, p < .01, and a Hand Movements × Ability to Control Behavior effect, χ²(2, N = 56) = 6.40, p < .05. The final model included these two effects (goodness-of-fit: χ²(3, N = 56) = 3.37, p = .33). Tables 1 and 2 give the frequency distributions concerning both effects.

Table 1 reveals that a majority of people with high public self-consciousness made fewer hand movements during deception than truth-telling. This supports Hypothesis 1. Table 1 further shows that many people with high public self-consciousness managed to make the same number of hand movements during deception and truth-telling (they did not make any hand movements at all during both interviews). Moreover, Table 1 shows that only a minority of people with high public self-consciousness made more hand movements during deception than truth-telling. The results for people with low public self-consciousness are less clear. Table 1 shows that 48% of them made more movements during deception, whereas 44% made fewer movements during deception.

Table 2 shows that people with a high ability to control their behavior also made fewer hand movements during deception than truth-telling. This supports Hypothesis 2. Moreover, Table 2 shows that only a minority of people with a high ability to control their behavior made more hand movements during deception than truth-telling. Again, the results for people with a low ability to control their behavior are less clear. Table 2 shows that 39% of them made more movements during deception, whereas 43% made fewer movements during deception. The Hand Movements × Public Self-Consciousness × Ability to Control Behavior effect was not significant, χ²(2, N = 56) = 2.44, p = .29, meaning that the two personality factors had an additive effect on hand movement patterns. Nevertheless, it is interesting to look at the magnitudes of the differences when both personality factors are taken into account. Table 3 is therefore shown for exploratory purposes.
Firstly, Table 3 makes clear that showing the same number of hand movements during both interviews (i.e., refraining from making any hand movements at all during both interviews) is a behavioral pattern typically pursued by people with high public self-consciousness, and not so much by those with an ability to control their movements. Secondly, there was a difference in behavioral patterns between people who are low in both public self-consciousness and ability to control behavior and people who are high in both public self-consciousness and ability to control behavior. For instance, 75% of the first group and only 6% of the latter group showed an increase in hand movements during deception. The difference in frequency distribution between these two groups was highly significant, χ²(2, N = 29) = 18.28, p < .01. The frequency distributions of the "low PSC-high ACB" and "high PSC-low ACB" groups did not differ significantly, χ²(2, N = 27) = .65, ns.

Discussion

To our knowledge, no prior study of how behavior differs in honest and deceptive interactions has examined individual differences in hand movements during deception. This article addressed the influence of two personality traits on deceptive behavior, namely public self-consciousness and ability to control behavior. The findings revealed that both traits were important in deceptive behavior. People who were high in public self-consciousness made fewer hand movements during deception compared to truth-telling, and people who had a high ability to control their behavior made fewer hand movements during deception compared to truth-telling.

One explanation for why people high in public self-consciousness make fewer hand movements during deception than their counterparts is that they experience much cognitive load during deception, due to the fact that they not only have to deceive the other but also have to think about how to make a credible impression. An alternative explanation is that they realize that observers will pay attention to these movements in order to catch a lie. By refraining from making these movements, they try to make the lie detection task a more difficult one for observers (the findings show that many of them try to avoid making any hand movements at all). This suggestion has an important implication. It is often suggested that a decrease in hand movements during deception happens involuntarily (both the control and the cognitive load approach assume that the decrease in hand movements occurs involuntarily). It may, however, be the case that some people show such a decrease on a voluntary basis in order to fool the observer. Our data do not enable us to determine which of these explanations is the most plausible one to explain the outcomes. Therefore, further research is needed concerning this issue. Anticipating such a study, one may argue that the cognitive load explanation seems to be the least plausible solution, as Kashy and DePaulo's (1996) study about individual differences in lie-telling in everyday situations revealed that people high in public self-consciousness do frequently tell lies. We may assume that this experience will make telling lies easier and generally not very demanding for them.
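The frequency comparisons reported above are, at bottom, tests on contingency tables. As a minimal sketch, the Python snippet below runs a chi-square test of independence on a personality-group by movement-pattern table; a plain chi-square test stands in here for the study's log-linear model, and the cell counts are invented placeholders, not the counts from Tables 1-3.

    from scipy.stats import chi2_contingency

    # Rows: low vs. high public self-consciousness (hypothetical counts).
    # Columns: decrease, increase, no difference in hand movements during deception.
    table = [
        [10, 11, 2],
        [ 7,  5, 21],
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square({dof}, N = {sum(map(sum, table))}) = {chi2:.2f}, p = {p:.3f}")

With the actual cell counts from Tables 1 and 2, the same call would test the kind of Hand Movements × personality-trait association reported above.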
[Tables 1, 2, and 3 about here.]

A plausible explanation as to why relatively large numbers of people with a low ability to control their behavior showed more movements during deception than during truth-telling, compared to their counterparts, is that they did not successfully manage to suppress their nervous movements during deception. In order to validate this explanation, we calculated correlations between making hand movements during deception and self-ratings of stress during deception, both for people with a low and for people with a high ability to control their behavior. The findings showed, as suggested, a positive correlation between making hand movements during deception and being stressed during deception for people with a low ability to control their behavior, r(28) = .32, p < .05, whereas such a relationship did not exist for people with a high ability to control their behavior, r(28) = .06, ns.

The findings revealed that 25% of the participants who were low in both public self-consciousness and ability to control behavior showed a decrease in hand movements during deception. This finding is perhaps surprising, as theoretically these people did not seem to bother about how to make a credible impression and did not believe that they were able to control their behavior. We might therefore expect an increase in these participants' hand movements during deception (due to the fact that they were nervous during deception and therefore may have displayed nervous behavior). Nevertheless, we believe that these people inadvertently showed a decrease in hand movements, perhaps due to cognitive load. It could be that they found it difficult to fabricate a lie. It may be argued that someone who scores low on the "ability to control behavior" scale not only perceives himself/herself as bad at controlling behavior, but should also experience much cognitive load during deception, because we may assume that these people do not frequently deceive and do not perceive deception to be a routine approach to resolving dilemmas presented in communicative contexts.6 In line with this reasoning is Kashy and DePaulo's (1996) recent finding that people low in "other-directedness" (part of our ability to control behavior scale) do not frequently deceive other people.

We are aware that our findings were obtained in an experimental situation which may well differ from real-life situations. What is the ecological validity of our findings; in other words, what do these results say about daily life situations? It has been frequently argued that behavioral differences between truth-tellers and liars can only be expected if liars are stressed (as a result of guilt or fear) and/or if some mental effort is required to formulate the lie. If both stress and mental effort are absent, behavioral cues to deception will probably not occur (DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996; Ekman, 1985; Vrij, 1997). DePaulo et al. (1996) recently showed that the majority of lies told in daily life situations are little lies; that is, they are unarousing and do not require much mental effort. As a result, they will probably not be associated with behavioral cues. Our research paradigm differs from this situation in the sense that our subjects experienced some stress when they were lying.
Of course, this happens in daily life situations as well; not all everyday lies are unarousing, as some lies do result in some level of stress, while other lies are even more serious and may therefore result in high levels of stress. We believe that our research paradigm provides insight into how people behave in situations in which some stress is involved, such as situations in which the liar feels some delight about being able to fool somebody else, or situations in which discovering the lie will result in some (minor) negative consequences (an unflattering situation for the liar, hurting somebody else's feelings and therefore creating a somewhat awkward situation in the interaction with the other, some form of minor punishment, and so on).

Recently, researchers have started to criticize the most popular research paradigm used to study deceptive communication. They have argued that more attention is needed concerning differences in deceptive behavior between lies with high or low stakes for being caught (Ekman et al., 1991), and differences in deceptive behavior between different types of lies, that is, falsification, equivocation, and concealment (Buller, Burgoon, White, & Ebesu, 1994). It has also been argued that the role of the interviewer is usually too limited; that is, a more active role for the interviewer, in which he/she is allowed to interact with the potential deceiver, is needed (Kalbfleisch, 1994; Patterson, 1995). Considering our findings, we would suggest that more attention is also needed regarding individual differences among deceivers in deception research.

NOTES

1. A "substantial" difference means a difference between the two interviews of more than twice the standard error of measurement.

2. Unlike Ekman et al. (1991), we classified all differences (even the smallest ones) as differences. These findings and Ekman et al.'s (1991) findings show almost the same ratio (6:4) of people who decrease or increase their movements during deception when the "no difference" categories are disregarded.

3. The "ability to control behavior" construct is related to the "self-monitoring" construct (Snyder, 1974), according to the definition of self-monitoring given by Briggs, Cheek, and Buss (1980, p. 6): "The prototypic high self-monitoring individual . . . is particularly sensitive to the expression and self-presentation of relevant others in social situations and uses these cues as guidelines for self-monitoring (that is, regulating and controlling) his or her own verbal and nonverbal self-presentation." According to Briggs et al. (1980), the Self-Monitoring Scale is composed of three factors, namely Acting, Extraversion, and Other-Directedness. We included Acting and Other-Directedness (Briggs et al., 1980) in our ability to control construct. Instead of Extraversion (the third factor of the self-monitoring scale) we included Riggio's (1986) "control" measures to complete the ability to control behavior scale. We did this because we believed, after reading the control and extraversion items, that the control items were more related to what we were looking for than the extraversion items. See the Method section for a full description of the ability to control behavior scale we used.

4. Aggregating behaviors across the entire interaction might be considered a potential limitation, because differences in nonverbal behavior during the course of the interaction cannot be analyzed. Gaining insight into these differences requires a segmentation analysis.
5. We realize that we lose some information by dichotomizing the personality variables and trichotomizing the hand movements variable. However, we believe that this is the best way to test our hypotheses. At first sight, a perhaps neater way to test the hypotheses would be to use the difference between truth and lie for each person as the dependent variable. The problem with such an analysis, however, is that it does not take into account the number of people that showed an increase or decrease in hand movements during deception (the relevant information in this context) but rather the extent of increase or decrease in hand movements these people showed during deception, which is a different and, in this context, irrelevant issue.

6. Thanks to an anonymous reviewer for this useful comment.

REFERENCES

Akehurst, L., Kohnken, G., Vrij, A., & Bull, R. (1996). Lay persons' and police officers' beliefs regarding deceptive behaviour. Applied Cognitive Psychology, 10, 461-471.
Briggs, S. R., Cheek, J. M., & Buss, A. H. (1980). An analysis of the Self-Monitoring Scale. Journal of Personality and Social Psychology, 38, 679-686.
Buller, D. B., Burgoon, J. K., White, C. H., & Ebesu, A. S. (1994). Interpersonal deception VII: Behavioral profiles of falsification, equivocation, and concealment. Journal of Language and Social Psychology, 13, 366-395.
Burgoon, J. K., Kelley, D. L., Newton, D. A., & Keely-Dyreson, M. P. (1989). The nature of arousal and nonverbal indices. Human Communication Research, 16, 217-255.
Davis, M., & Hadiks, D. (1995). Demeanor and credibility. Semiotica, 106, 5-54.
DePaulo, B. M. (1988). Nonverbal aspects of deception. Journal of Nonverbal Behavior, 12, 153-162.
DePaulo, B. M. (1992). Nonverbal behavior and self-presentation. Psychological Bulletin, 111, 233-243.
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70, 979-995.
DePaulo, B. M., & Kirkendol, S. E. (1989). The motivational impairment effect in the communication of deception. In J. C. Yuille (Ed.), Credibility assessment (pp. 51-70). Dordrecht: Kluwer Academic Publishers.
DePaulo, B. M., Stone, J. I., & Lassiter, G. D. (1985). Deceiving and detecting deceit. In B. R. Schlenker (Ed.), The self and social life (pp. 323-370). New York: McGraw-Hill.
Ekman, P. (1985). Telling lies. New York: W. W. Norton.
Ekman, P. (1988). Lying and nonverbal behavior: Theoretical issues and new findings. Journal of Nonverbal Behavior, 12, 163-176.
Ekman, P. (1989). Why lies fail and what behaviors betray a lie. In J. C. Yuille (Ed.), Credibility assessment (pp. 71-82). Dordrecht: Kluwer Academic Publishers.
Ekman, P., & Friesen, W. V. (1969). The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica, 1, 49-98.
Ekman, P., & Friesen, W. V. (1972). Hand movements. Journal of Communication, 22, 353-374.
Ekman, P., O'Sullivan, M., Friesen, W. V., & Scherer, K. R. (1991). Face, voice, and body in detecting deceit. Journal of Nonverbal Behavior, 15, 125-135.
Exline, R., Thibaut, J., Hickey, C., & Gumpert, P. (1970). Visual interaction in relation to Machiavellianism and an unethical act. In R. Christie & F. Geis (Eds.), Studies in Machiavellianism (pp. 53-75). New York: Academic Press.
Fenigstein, A., Scheier, M. F., & Buss, A. H. (1975). Public and private self-consciousness: Assessment and theory. Journal of Consulting and Clinical Psychology, 43, 522-527.
Hofer, E., Kohnken, G., Hanewinkel, R., & Bruhn, Ch. (1992).
Diagnostik und Attribution von Glaubwürdigkeit. Kiel: Final report to the Deutsche Forschungsgemeinschaft, KO ssz/4-2.
Kalbfleisch, P. J. (1994). The language of detecting deceit. Journal of Language and Social Psychology, 13, 469-496.
Kashy, D. A., & DePaulo, B. M. (1996). Who lies? Journal of Personality and Social Psychology, 70, 1037-1052.
Knapp, M. L., Hart, R. P., & Dennis, H. S. (1974). An exploration of deception as a communication construct. Human Communication Research, 1, 15-29.
Kohnken, G. (1989). Behavioral correlates of statement credibility: Theories, paradigms and results. In H. Wegener, F. Losel, & J. Haisch (Eds.), Criminal behavior and the justice system: Psychological perspectives (pp. 271-289). New York: Springer-Verlag.
Kohnken, G. (1990). Glaubwürdigkeit: Untersuchungen zu einem psychologischen Konstrukt. München: Psychologie Verlags Union.
Kraut, R. E., & Poe, D. (1980). On the line: The deception judgments of customs inspectors and laymen. Journal of Personality and Social Psychology, 36, 380-391.
O'Hair, H. D., Cody, M. J., & McLaughlin, M. L. (1981). Prepared lies, spontaneous lies, Machiavellianism, and nonverbal communication. Human Communication Research, 7, 325-339.
Patterson, M. L. (1995). A parallel process model of nonverbal communication. Journal of Nonverbal Behavior, 19, 3-31.
Riggio, R. E. (1986). Assessment of basic social skills. Journal of Personality and Social Psychology, 51, 649-660.
Riggio, R. E., & Friedman, H. S. (1983). Individual differences and cues to deception. Journal of Personality and Social Psychology, 45, 899-915.
Riggio, R. E., Tucker, J., & Throckmorton, B. (1988). Social skills and deception ability. Personality and Social Psychology Bulletin, 13, 568-577.
Siegman, A. W., & Reynolds, M. A. (1983). Self-monitoring and speech in feigned and unfeigned lying. Journal of Personality and Social Psychology, 45, 115-128.
Snyder, M. (1974). Self-monitoring of expressive behavior. Journal of Personality and Social Psychology, 30, 526-537.
Vrij, A. (1993). Credibility judgments of detectives: The impact of nonverbal behavior, social skills, and physical characteristics. Journal of Social Psychology, 133, 601-611.
Vrij, A. (1995). Behavioral correlates of deception in a simulated police interview. Journal of Psychology: Interdisciplinary and Applied, 129, 15-29.
Vrij, A. (1996). Misverstanden tussen politie en verdachten in een gesimuleerd politieverhoor. Nederlands Tijdschrift voor de Psychologie, 51, 137-146.
Vrij, A. (1997). Nonverbal communication and credibility. In A. Memon, A. Vrij, & R. Bull, Legal psychology: Accuracy and perceived credibility of suspects, victims and witnesses (working title). New York: McGraw-Hill, in press.
Vrij, A., & Semin, G. R. (1996). Lie experts' beliefs about nonverbal indicators of deception. Journal of Nonverbal Behavior, 20, 65-81.
Vrij, A., Semin, G. R., & Bull, R. (1996). Insight into behavior during deception. Human Communication Research, 22, 544-562.

Honest or deceitful?: A study of persons' mental models for judging veracity Human Communication Research Thousand Oaks Dec 1997
--------------------------------------------------------------------------------
Authors: John S Seiter
Volume: 24
Issue: 2
Pagination: 216-259
ISSN: 03603989
Subject Terms: Social research
Cognition & reasoning
Models
Honesty

Abstract: Seiter focused on understanding the detailed and dynamic mental models that people develop for judging veracity.
He hypothesized that individual differences in such mental models would predict participants' attributions and confidence in making attributions.

Copyright Sage Publications, Inc. Dec 1997

Full Text: This study focused on understanding the detailed and dynamic mental models that people develop for judging veracity. It was hypothesized that individual differences in such mental models, assessed by using Thagard's (1989) ECHO computer simulation program, would predict participants' attributions and confidence in making attributions. After watching videotapes of targets, 120 participants rated targets' veracity and their own confidence in making attributions. Half of these participants also provided "on-line" information that, in turn, was entered into ECHO. The two groups did not differ in their judgments of veracity, but the on-line group was significantly more confident. Results from ECHO and network analysis indicated not only that participants' mental models for detecting deception are detailed, changing, and idiosyncratic, varying in their structure and degree of coherence, but also that a number of previously unidentified cognitive structures are used for detecting deception. Results that confirm the hypotheses are also presented.

How do we decide that we are being lied to? Although considerable research has examined the degree to which factors such as suspiciousness (e.g., Burgoon, Buller, Ebesu, & Rockwell, 1994; Toris & DePaulo, 1985), cognitive biases (e.g., Zuckerman, Fischer, Osmun, Winkler, & Wolfson, 1987), and familiarity with a deceptive source (e.g., Brandt, Miller, & Hocking, 1980; Burgoon et al., 1994; Comadena, 1982; McCornack & Parks, 1985; O'Sullivan, Ekman, & Friesen, 1988; Seiter & Wiseman, 1995) mediate the accuracy with which individuals can detect deception, only a handful of studies have dealt specifically with the ways in which people judge deception, regardless of how accurate the judgments are (e.g., Bond, Kahler, & Paolicelli, 1985; Gordon, Baxter, Rozelle, & Druckman, 1987; Hale & Stiff, 1990; Kraut, 1978; Kraut & Poe, 1980; Maier & Janzen, 1967; Riggio & Friedman, 1983; Riggio, Tucker, & Widaman, 1987; Stiff, Hale, Garlick, & Rogan, 1990; Stiff & Miller, 1984, 1986; Streeter, Krauss, Geller, Olson, & Apple, 1977; Zuckerman, Koestner, & Driver, 1981). In most instances, such research has examined the verbal and nonverbal cues that people rely on when making attributions about another individual's veracity. Although the results of these studies show some general consistencies (i.e., most indicate that people expect liars to exhibit less eye contact, more postural shifts, slow speech, longer response latencies, and nervous behaviors), they present a somewhat inconsistent picture of many of the behaviors that people look for when detecting deception. For example, whereas Riggio and Friedman (1983) found that liars were perceived as smiling less, Stiff and Miller (1986) and Gordon et al. (1987) found that perceptions of deception were positively associated with increased smiling. Moreover, whereas Stiff and Miller (1986), Gordon et al. (1987), and Kraut and Poe (1980) reported that increases in self-touching (i.e., self-adaptors and self-grooming) were associated with attributions of deception, Riggio and Friedman (1983) found just the opposite (i.e., self-touching was not associated with perceptions of deception). It is clear, then, that the results of studies examining cues that are used to detect deception are not without their inconsistencies.
Of course, these inconsistencies could stem from any number of factors, such as the tendency of researchers to make faulty assumptions about factors (e.g., arousal) underlying the enactment of deception (see Fiedler & Walka, 1993), the use of different methodologies, different subject pools, and so forth. On the other hand, these inconsistencies may suggest that these cues are not "absolute" indicators of deception. Rather, their interpretation may depend on the particular configurational array within which they are embedded. Whatever the case, it is clear that previous research tells us very little about the process by which judgments of veracity are made. How do people integrate a vast array of nonverbal and verbal behavioral information, past knowledge, and inference to conclude that another individual is lying or telling the truth? How might people make sense of cues that may be multiple and possibly contradictory? How might their perceptions change as they receive more information? For instance, in the courtroom, a juror might initially perceive a defendant as an honest person of high integrity. However, after considering the specificity of testimony against the defendant, a juror might change his or her mind, concluding that the defendant is a liar. Ultimately, however, the juror's mind could be changed again by a number of factors, such as the intensity of the defendant's denials. In short, because of a number of different information sources, conclusions about a person's veracity may change over time, resulting in different judgments at different times.

This study argues that recent work by Thagard (1989) and Miller and Read (1991), which provides a "connectionist," cognitive science conceptual framework and methodology, offers a promising approach for addressing such issues. Using this connectionist approach, the general purpose of this study is to explore, more dynamically and holistically, individuals' rich and detailed "mental models" or constructions of events that lead to inferences about the deceptiveness or truthfulness of a communicator. As will be discussed in more detail later, the work of Thagard (1989) and Miller and Read (1991) provides a model of the ways in which individuals develop coherent mental representations of others to explain or make attributions about others' behavior. As such, it will be argued that such a model might be useful for understanding and predicting the process and outcomes of deception detection. Specifically, guided by this model, this study both qualitatively and quantitatively examines (a) how individuals construct mental representations of others' veracity and (b) the degree to which the coherence of an individual's mental representation of others is related to outcomes such as the attributions he or she makes, and his or her confidence in making such attributions. The following section discusses prior literature to illustrate how a cognitive framework can be useful for understanding such issues.

REVIEW OF LITERATURE

A Cognitive Framework for Understanding Deception Detection

Although little research has addressed the ways in which judgments of deception are related to cognitions, two notable exceptions are represented in the works of Fiedler and Walka (1993) and Buller and Walther (1989).
Fiedler and Walka argued that, in general, people lack the knowledge needed to use nonverbal cues that discriminate honest from deceptive communication, so that, when detecting deception, they rely on general heuristics that may not be reliable for making such discriminations. Similarly, relying on the work of Rumelhart (1984), Buller and Walther (1989) argued that two types of cognitive processes can mediate the act of deception detection: "bottom-up" and "top-down" processing. Bottom-up processing occurs when cognitive schemata (i.e., hierarchically organized mental structures containing knowledge about the social and physical world that guide perception, interpretation, storage, and recall of information) at low levels of abstraction are activated by a particular datum and then activate higher level schemata to account for that datum (Buller & Walther, 1989). In contrast, top-down processing occurs when a higher order schema is activated first and, in turn, lower level schemata are activated to search for more data (Buller & Walther, 1989; Rumelhart, 1984).

According to Buller and Walther (1989), previous research in the area of deception detection can be seen as taking either a bottom-up or top-down approach. For example, research that examines the specific nonverbal behaviors that people use to detect deception has taken a bottom-up approach because it assumes that specific behavioral changes (e.g., reduced gaze, longer response latencies, and more speech errors) activate attributions of deceit from those attempting to detect deceit. On the other hand, a considerable amount of deception research has taken a top-down approach, including research that has examined subjects' stereotypes about deceptive behavior (see above). According to Buller and Walther (1989), these stereotypes amount to a "deception script" containing the behavioral sequences that people expect deceivers to encode. Communicators, they argue, enter some conversations with these scripts "preactivated," and the deception scripts, in turn, guide attention to subsequent behavior in a top-down fashion.

Whether deception detection is based on a top-down or bottom-up process (or both), it appears clear that we need to better understand the detailed connections between the "behaviors" that perceivers observe and the conceptual structures that perceivers activate in evaluating the deceptiveness of a target individual. On the basis of the work on connectionist modeling in social cognition and cognitive science (Kintsch, 1988; Rumelhart & McClelland, 1986), Thagard (1989) and Miller and Read (1991) provide a useful, detailed framework for elaborating and exploring such connections among behavior and inferences. The general purpose of Thagard's (1989) and Miller and Read's (1991) work is to provide an account of the process by which individuals form coherent mental representations of others to explain, or make attributions about, others' behavior. In developing these coherent mental models or pictures of others, Miller and Read (1991) note that people rely on cognitive knowledge structures (which include goals, plans, scripts, roles, and themes) to make inferences from others' behavior and to integrate these inferences into a meaningful picture. According to Miller and Read (1991), this process of developing a coherent model includes two steps. First, input (e.g., the observation of behavior, communication, and events) leads to the activation of mental concepts that are related to the input.
For example, if we see a man talking to a woman in a bar, it might activate a goal-based concept such as "he's trying to pick someone up," or if we see someone with a certain skin color, it might activate stereotypes about race (Miller & Read, 1991). In other words, concepts at a more general and higher level of abstraction (he has the goal of picking someone up) are activated because they can be used to explain input at a lower level of abstraction (man in bar/talking to woman). Miller and Read (1991) argued that there are three ways in which any two concepts may be linked. First, when connected by a positive or excitatory link, activation of one concept increases the activation of the other concept. Second, when connected by a negative or inhibitory link, activation of one concept decreases the activation of another. And third, there may be no link between two concepts and therefore no activation or deactivation between them. From the discussion so far, it is clear that activated concepts may be large in number and possibly contradictory. Indeed, Kintsch (1988) and Read (1992) noted that concepts may initially be activated indiscriminately, with little attention being paid to their consistency with other activated concepts. For instance, a man talking to a woman in a bar may be explained by a goal-based concept such as "he's trying to pick someone up," but also with the contradictory knowledge that the man is happily married to someone else but friendly. Thus, many of the activated concepts may be likened to hypotheses or inferences competing to explain input (see McClelland, Rumelhart, & Hinton, 1986). Those concepts that receive the most activation are used to make sense of the social world. How, then, do some concepts receive more activation than others? According to Miller and Read (1991), once an initial loose network of connected concepts is developed, the second step of building a coherent mental model involves spreading activation throughout the network of concepts. In this step, activation is simultaneously dispersed through the links and concepts in the mental network, resulting in differing levels of activation for each of the concepts. The theoretical rules by which the coherence of a network is determined were specified by Thagard (1989) in his theory of explanatory coherence. The principles are as follows: First, the principle of symmetry argues that if one concept (P) coheres with another (Q), then the latter (Q) also coheres with the former (P). Thus, in terms of the cognitive links discussed earlier, symmetry produces a symmetric excitatory link if two concepts cohere and a symmetric inhibitory link if two concepts incohere. Second, the principle of breadth refers to the fact that a concept (e.g., "he cheated on the exam") receives more activation than its competitors (e.g., "he did not cheat") if it explains more facts (e.g., he received a good grade, he did not know the subject material later, he was looking at other students' exams) than its competitors. The third principle involves being explained and argues that explanations receive more activation if they are explained by further explanations (e.g., "he cheated" is explained by "he did not study," and "he did not study" is explained by "he is lazy") than if they are not.
Fourth, the principle of simplicity argues that a concept receives more activation when it requires fewer assumptions (e.g., "he received a good grade, did not know the subject material, and looked at other students' exams because he cheated" requires fewer assumptions than "he received a good grade, did not know the subject material, and looked at other students' exams because he was lucky, nervous, and forgetful"). Fifth, the principle of analogy argues that a concept is more coherent if its support is analogous to support for a similar concept (e.g., "he cheated on other exams"). Sixth, the principle of data priority argues that a concept describing results of an observation receives more activation than a concept whose sole justification is what it explains. Finally, the principle of contradiction argues that when two concepts contradict (e.g., "he cheated" versus "he did not cheat"), they do not cohere. In addition to specifying these principles, Thagard's (1989) theory assumes that concepts are not activated in isolation. Instead, Thagard's theory argues that the principles are applied at the same time in such a way that activation is simultaneously dispersed through the links and concepts in a network. As Read (1992) noted: "This is an example of a parallel constraint satisfaction process that is a fundamental part of recent work on connectionist modeling or parallel distributed processing. Such a process evaluates, in parallel and simultaneously, the extent to which concepts in the network are consistent with and supported by other concepts in the network. This is in contrast to a serial process where each concept would be evaluated, one at a time" (p. 12). Thus, the framework provided by Thagard (1989) and Miller and Read (1991) assumes that at any given time, there may be multiple interpretations of the same behaviors. The theory shows how strong concepts in a network win out over weaker ones. In general, a concept is highly activated if it is connected with a greater number of excitatory links, whereas a concept has a low level of activation if it is connected with a greater number of inhibitory links or is connected to relatively few concepts. Concepts that are not supported by other concepts in a network fade away. On the other hand, concepts that are supported are strengthened and, as assumed by the theory, are used to make sense of social interactions and other people. In other words, we use highly activated concepts to make attributions about other people.
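Although the theory is stated verbally, the parallel constraint satisfaction process it describes can be summarized in a single updating rule. The formulation below is a standard connectionist sketch of the kind such simulations use, not an equation taken from this article; the decay parameter value, in particular, is an illustrative assumption.

```latex
% Net input to concept j at time t, summed over all linked concepts i
% (w_ij > 0 for an excitatory link, w_ij < 0 for an inhibitory link):
\mathrm{net}_j(t) = \sum_i w_{ij}\, a_i(t)

% Each activation decays toward zero (theta is a small decay rate, e.g.,
% 0.05) and is pushed toward 1 by excitatory net input or toward -1 by
% inhibitory net input:
a_j(t+1) = a_j(t)\,(1 - \theta) +
  \begin{cases}
    \mathrm{net}_j(t)\,\bigl(1 - a_j(t)\bigr), & \text{if } \mathrm{net}_j(t) > 0,\\
    \mathrm{net}_j(t)\,\bigl(a_j(t) + 1\bigr), & \text{otherwise.}
  \end{cases}
```

Applied to every concept at once and iterated until activations stop changing, a rule of this kind drives mutually supportive concepts toward 1 and contradicted, weakly supported concepts toward -1.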
The next section discusses the ways in which this framework informs the study of deception.

RESEARCH QUESTIONS AND HYPOTHESES

It was noted earlier that previous deception research has not addressed issues such as the ways in which people construct meaningful attributions about a person's veracity on the basis of multiple communication sources and factors. It is argued here, however, that the work of Thagard (1989) and Miller and Read (1991) provides a model to address such issues. Specifically, in addition to allowing an integration of former research on deception, an explanatory coherence model is capable of addressing the ways in which people attempt to make sense of the information that they receive. It suggests that the process of deception detection can be understood through a qualitative examination of the mental models that individuals construct when observing the potentially deceptive communication of others. For example, an examination of these networks might promote an understanding of the ways in which people integrate information (e.g., nonverbal and verbal cues, knowledge, inferences) to reach a conclusion about someone's veracity, the ways in which people's decisions about someone's veracity may change, and the ways in which people come to terms with contradictory information such as the simultaneous observation of behaviors that are indicative of both truthfulness and deceptiveness. Thus, although it is suspected that the mental models that individuals construct will be highly idiosyncratic, one research question (RQ) is proposed as follows:

RQ1: What is the nature of the mental models that people construct when attempting to reach conclusions about other individuals' veracity?

In addition, the theory of explanatory coherence makes testable assumptions about the relationship between concepts that receive activation in a person's cognitive network and attributions that are made as a result of such activation. Specifically, the theory assumes that concepts that receive high levels of activation (in accordance with the principles set forth by Thagard) are used to make sense of the social world. With regard to this study, when judging veracity, individuals must choose between two competing models: Is the person lying or is the person telling the truth? If the theory of explanatory coherence is tenable, the concepts receiving the most activation (when Thagard's principles are applied) should correspond to attributions that are made in "real life." In other words, coherence theory would predict that an actual attribution of truthfulness or deceit would depend on which of these concepts received the highest level of activation in a person's cognitive network. Thus, the following hypothesis is tested in the present study:

H1: Subjects will perceive an individual as truthful when their mental models of "truthfulness" (i.e., "the person I am observing is being honest") receive the highest level of activation in their cognitive networks, whereas subjects will perceive an individual as deceptive when their mental models of "dishonesty" receive the highest level of activation.

Furthermore, it is argued that coherence theory provides a means of predicting how confident individuals should be concerning the attributions they make about deceptiveness. Specifically, concepts with high levels of activation are those that are connected to, or supported by, a greater amount of input or data. Thus, these concepts should be the ones that are related not only to the final attribution that an individual makes but also to the confidence with which the individual makes the attribution. Stated differently, the second hypothesis of this study is as follows:

H2: Subjects with a high level of activation for a concept (e.g., "deceitfulness") should be more confident in making an attribution (e.g., of deceitfulness) than a person with a low level of activation for the concept.

METHOD

Overview

One hundred twenty participants watched one of four videotapes in which a stimulus target communicated truthfully or deceptively. As the tapes proceeded, half of the participants were asked to make inferences "on-line" regarding the stimulus target's behaviors. After viewing segments of the tape, these participants made inferences about the target's veracity and rated how confident they were in making such inferences.
These participants' inferences were subsequently analyzed using Thagard's (1989) ECHO program, a computer simulation that implements the seven theoretical principles of explanatory coherence discussed earlier. This analysis provided measures of participants' mental models and the degree to which deception or truthfulness were highly activated concepts (i.e., inferences) within these models. The basic design, therefore, was correlational, assessing the relationship between the activation level of concepts in individuals' mental models and the confidence with which attributions of deceptiveness were made. The remaining 60 participants were also asked to judge the target's veracity and their own confidence but did so after viewing the stimulus tapes all the way through. Thus, these participants acted as a control group, helping to determine whether stopping the tape affected confidence or veracity ratings.

Development of the Stimulus Tapes

Stimulus Targets

The videotaped stimulus targets were one White man, age 29, and one White woman, age 31. Both were known personally to the researcher, and both volunteered to participate in the study.

Lists of Interview Questions

An examination of previous research (e.g., Cupach & Metts, 1986; Harvey, Agostinelli, & Weber, 1989; Harvey, Wells, & Alvarez, 1987) indicated that people's accounts for the dissolution of their close relationships would be a realistic and rich source of communication messages for this study's stimulus materials. An interview list that consisted of six preliminary and six probing questions concerning previous relationships and why they failed was developed and is shown in Appendix A.

Room and Videotaping Equipment

Two chairs, placed approximately 6 ft (1.8 m) apart, faced one another in the middle of a large living room. A color video camera was set up behind one chair so that it framed the stimulus targets from the tops of their heads to just below their knees when they were seated in the other chair.

Videotaping Procedures

Each of the stimulus targets participated in one "honest interview" and one "deceptive interview." Before the first, targets were told, "We will be asking you several questions about a past close relationship you were in that ended. Please answer all of the questions to the best of your ability and be completely truthful." Before the deceptive interview, the targets were told, "We will be asking you several questions about a past close relationship that ended but want you to be completely deceptive. That is, make up a person and lie about your relationship with him/her. Please answer all of the questions to the best of your ability." After the targets were given the above instructions, they were allowed 5 minutes to prepare for the interview and then led to the chair. Next, a White male interviewer asked the target if he or she had any questions, started the video equipment, took a seat in the chair facing the stimulus target, and proceeded to ask the target the interview questions. The list of interview questions was meant to be a guide for the interview, providing topically similar tapes across the targets and conditions. When possible, the interviewer followed up each question, probing for detailed information about reasons for the target's answers, thoughts, and feelings. Following the first interview, the target was given 10 minutes to relax and then prepared for the second interview.
When each interview was completed, the targets were asked if they were indeed truthful in the "honest interview" and deceptive in the "deceptive interview." Both targets indicated that they had been honest and deceptive when instructed to do so.

Viewing of Videotapes

Participants

The participants in this study were 120 students (48 males, 72 females; mean age = 21.9) enrolled in interpersonal and organizational communication classes at a large southwestern university. All received course credit for volunteering to participate in the study.

Apparatus

The four videotaped stimulus target interviews were viewed by the participants on a large color television set. The approximate times of the taped interviews were as follows: honest male target, 6 minutes and 45 seconds; deceptive male target, 10 minutes and 15 seconds; honest female target, 7 minutes; deceptive female target, 9 minutes.

Procedures

Half of the participants (25 male students, 35 female students) watched one of the four stimulus target videotapes. Upon arriving at the laboratory one at a time, each participant was randomly assigned to view one of the tapes. When all four tapes had been viewed, random assignment started again. All these participants were told that they would be watching a videotape of someone who was either lying or telling the truth. Moreover, they were told that the goal of the experiment was to determine what they were thinking about the person on the tape while they were watching the tape. Participants were told that they would be asked to distinguish between the inferences they made and the facts or data (if any) on which the inferences were based. Following the preliminary instructions, the researcher (a White man) played one videotaped interview for each participant, stopping the videotape at predetermined times to ask the participants questions about their perceptions of the videotaped stimulus targets. Stopping times were predetermined on the basis of the questions asked during the making of the stimulus tapes. Specifically, because six general interview questions were asked of stimulus targets, the tapes were stopped six times (i.e., once after each question was answered) during the course of the interview. Once the tapes were stopped, the participants were asked questions regarding their impressions of the stimulus targets. After stopping the tape each time, participants were asked one standard question: "What is happening here?" Typically, participants responded by making inferences about the targets' veracity (e.g., "I think she is lying"). Because each participant's mental model developed differently, further questions were not standardized or asked in a standard order but were based on participants' responses. All follow-up questions were similar, however, in that they were designed to probe for other inferences or for data that participants had observed to support inferences. Examples of such questions included the following: "Why do you think this is happening?"; "What makes you think that?"; "How is this person acting?"; and "Were you thinking anything else?" To ensure that participants were not led to observe some behaviors and not others, and make some inferences and not others, the interviewer limited follow-up questions to those that explored what concepts (data and inferences) were being activated by participants and to clarifications regarding how the concepts were related.
Moreover, as each question was answered, the interviewer attempted to give verbally and nonverbally neutral responses while mapping participants' mental models. As noted earlier, Thagard's (1989) theoretical principles are based on relationships between concepts (i.e., data and inferences) in a cognitive network. Thus, to analyze each participant's mental model according to these principles, it was necessary to record the ways in which each participant perceived concepts in their mental models to be related to one another. With this in mind, a mapping procedure was developed using a large poster board with numerous rows and columns. Each time the participant made an inference and identified it as such, the inference was written along the far left side of a row and the top of a column. Furthermore, the data that participants believed were explained by the inference were written along the far left side of a row and at the top of a column. This resulted in a matrix with all of the concepts (i.e., data and inferences) generated by participants appearing in both the rows and the columns. Information in the columns indicated concepts that participants used to explain other concepts, and information in the rows indicated concepts that were explained by other concepts. When the participant made an inference (e.g., the stimulus target is lying) and believed that a particular datum was explained by that inference (e.g., the stimulus target seems anxious), a mark indicating this relationship was placed where the row and column for the inference and datum met. When the participant indicated two contradictory inferences (e.g., the stimulus target is lying and is telling the truth), this was also indicated where the row and column for the inferences met. Participants also indicated when they believed inferences or data were contradictory (an example of this procedure is shown in Appendix B). Thus, determinations regarding inferences, data, and relationships between concepts were made by participants.
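Appendix B is not reproduced here, but the poster-board matrix lends itself to a simple computational rendering. The sketch below is purely illustrative: the concept labels and the two recorded relationships are invented stand-ins, since in the study every concept came from the participant being interviewed.

```python
# A hypothetical rendering of the poster-board matrix as a data structure.
# The concept labels and recorded relationships here are invented stand-ins;
# in the study, every concept came from the participant being interviewed.

concepts = ["target is lying",           # an inference
            "target seems anxious",      # a datum the inference explains
            "target is telling the truth"]

n = len(concepts)
explained_by = [[False] * n for _ in range(n)]   # row datum, column inference
contradicts = [[False] * n for _ in range(n)]    # symmetric contradictions

def mark_explains(inference, datum):
    # Place a mark where the datum's row meets the inference's column.
    explained_by[concepts.index(datum)][concepts.index(inference)] = True

def mark_contradicts(a, b):
    i, j = concepts.index(a), concepts.index(b)
    contradicts[i][j] = contradicts[j][i] = True

mark_explains("target is lying", "target seems anxious")
mark_contradicts("target is lying", "target is telling the truth")

# Row i, column j is True when concept j explains concept i:
for name, row in zip(concepts, explained_by):
    print(name.ljust(28), row)
```

Recording explanations and contradictions row-by-column in this way preserves exactly the information the ECHO analysis described below requires: which concepts explain which, and which contradict.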
In addition to mapping participants' mental models, it was also necessary to assess whether they believed the stimulus target on the tape was behaving honestly or deceptively, and how confident they were in their attributions. Participants' perceptions of stimulus targets' veracity were assessed throughout the interview (each time the tape was stopped) by asking them (a) to indicate on a dichotomous checklist whether they thought the targets were lying or telling the truth; and (b) to indicate on a 9-point, Likert-type instrument whether they thought the target was definitely deceptive or definitely truthful. Participants' confidence in making the dichotomous attribution was assessed with a 9-point, Likert-type rating scale (1 = extremely confident and 9 = not at all confident about their attribution). The remaining 60 participants (23 male students, 37 female students) also watched the videotapes but did so in groups of 15 without being interviewed and without having the tape stopped. After being randomly assigned to one tape and being given the same instructions as the interviewed participants, each group watched a tape and rated the stimulus target using the same scales described above. After rating the taped interviews, all participants were thanked and debriefed.

Data Analysis

The data collected from the interviewed participants were analyzed using a computer program developed by Thagard (1989). The program, called ECHO, is a simulation that simultaneously sets in motion the seven principles that Thagard (1989) argued could be used to assess the coherence of a network (see above). How are the principles implemented by ECHO? First, as the principle of symmetry suggests, the program establishes an excitatory link between concepts that cohere and an inhibitory link between concepts that "incohere." These links are symmetric, so the activation weight from Concept 1 to Concept 2 is the same as the activation weight from Concept 2 to Concept 1. Second, the principle of breadth is implemented in ECHO by summing the activation of all the concepts that are explained. Third, the principle of being explained is implemented by the program because concepts send activations to concepts they explain. Fourth, the principle of simplicity is implemented when activation sent by a concept is divided by the number of concepts that do the explaining. Thus, a single explanatory concept will receive more activation than will multiple explanatory concepts. Fifth, the principle of data priority is implemented by linking concepts that represent observed data to a special evidence unit that always has an activation of 1.0 to indicate that evidence has a special degree of strength and is more immune to deactivation than are other concepts. Sixth, the principle of analogy is implemented when ECHO produces excitatory links between the explanatory concepts contained in the analogy. Seventh, the principle of contradiction is implemented by establishing an inhibitory link between two contradictory concepts so that concepts become deactivated when they compete with more strongly activated concepts. The use of the program for this study involved specifying which concepts participants used as evidence or data, which they used as inferences, and which inferences they used to explain which data. The matrix of data presented in Appendix B has been translated into List Processing (LISP) structures in Appendix C to illustrate how the concepts (i.e., data and inferences) are entered into the program. Appendix D diagrams the matrix of data to illustrate the process more concretely. First, inferences or data in the network (i.e., the stimulus target is truthful, the stimulus target is deceptive, the stimulus target is anxious, the stimulus target's answers are specific, the stimulus target's answers are vague, the stimulus target stuttered) are coded as E1, E2, E3, E4, E5, and E6, respectively. Second, it is necessary to specify which inferences explain which data/inferences. For example, E2 (the stimulus target is deceptive) and E3 (the stimulus target is anxious) explain E5 (the stimulus target's answers are vague), and E1 (the stimulus target is truthful) explains E4 (the stimulus target's answers are specific). In other words, the fact that the stimulus target is lying and is nervous explains why the target's answers are vague, and the fact that the stimulus target is truthful explains why the target's answers are specific. Next, it is necessary to specify contradictions in the network. For example, E1 (stimulus target is truthful) contradicts E2 (stimulus target is deceptive). Finally, concepts that are considered data must be specified, for they are linked to a special evidence unit that is given an activation of 1.0 to indicate that evidence has a higher degree of credibility. In this example, concepts E4, E5, and E6 are considered data units.
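Appendix C's LISP input is likewise not reproduced here, but the same example network can be expressed and run in a few dozen lines. The following Python sketch is a simplified, ECHO-style simulation rather than Thagard's (1989) actual program; the weight values, decay rate, and starting activations are illustrative assumptions.

```python
# An illustrative, simplified ECHO-style run of the example network (E1-E6).
# This is a sketch, not Thagard's (1989) program: the weight values, decay
# rate, and starting activations below are assumptions chosen for clarity.

import itertools

EXCIT, INHIB, DATA_W, DECAY = 0.05, -0.20, 0.10, 0.05

concepts = ["E1", "E2", "E3", "E4", "E5", "E6", "EVIDENCE"]

# (explaining hypotheses, explained concept), from the example above
explains = [(["E2", "E3"], "E5"),  # deceptive + anxious -> vague answers
            (["E1"], "E4")]        # truthful -> specific answers
contradicts = [("E1", "E2")]       # truthful vs. deceptive
data = ["E4", "E5", "E6"]          # observed data units

weights = {}

def link(a, b, w):
    # Principle of symmetry: every link is symmetric.
    weights[(a, b)] = weights.get((a, b), 0.0) + w
    weights[(b, a)] = weights.get((b, a), 0.0) + w

for hyps, explained in explains:
    share = EXCIT / len(hyps)          # principle of simplicity
    for h in hyps:
        link(h, explained, share)      # breadth / being explained
    for h1, h2 in itertools.combinations(hyps, 2):
        link(h1, h2, share)            # cohypotheses support one another

for a, b in contradicts:
    link(a, b, INHIB)                  # principle of contradiction

for d in data:
    link("EVIDENCE", d, DATA_W)        # principle of data priority

activation = {c: 0.01 for c in concepts}
activation["EVIDENCE"] = 1.0           # special evidence unit, clamped

for cycle in range(200):               # update in parallel until asymptote
    net = {c: sum(weights.get((o, c), 0.0) * activation[o] for o in concepts)
           for c in concepts}
    new = {"EVIDENCE": 1.0}
    for c in concepts:
        if c == "EVIDENCE":
            continue
        a = activation[c] * (1 - DECAY)        # decay toward zero
        if net[c] > 0:
            a += net[c] * (1 - activation[c])  # pushed toward +1
        else:
            a += net[c] * (activation[c] + 1)  # pushed toward -1
        new[c] = max(-1.0, min(1.0, a))
    if max(abs(new[c] - activation[c]) for c in concepts) < 1e-4:
        break
    activation = new

for c in ["E1", "E2"]:  # the two competing veracity concepts
    print(c, round(activation[c], 2))
```

Running the sketch spreads activation in parallel until the network settles; whichever of E1 and E2 ends up more activated is the concept the simulated model coheres around, which is how the analyses reported below use ECHO's output.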
Following the specification of links in each participant's network, the ECHO program was run. The program works by applying a parallel constraint satisfaction process in which activation is simultaneously spread throughout the network to converge on a pattern of activation that best fits the constraints imposed by the concepts and links in a network. The program runs in cycles, each corresponding to an action in the simulated social interaction. As new concepts are added to the network, the strengths between links are modified in parallel. The modification of concepts continues until all activation asymptotes. Ultimately, activation levels (ranging from a maximum of 1 to a minimum of -1) are automatically assigned to the concepts in a network. As the program runs, some concepts become more activated than others. A concept that receives a high level of activation will cause concepts that compete with it to become deactivated. For example, in the diagram in Appendix D, the "truthful" concept might receive a lower level of activation than the "deceptive" concept. Those concepts that receive the most activation can be interpreted as being the concepts that a person's model coheres around (Miller & Read, 1991). Thus, with regard to this study, if the concept "he is lying" receives the most activation, it can be assumed to be more acceptable than the concept "he is telling the truth." This type of assessment was made for each of the participants in the present study. Ultimately, therefore, the use of Thagard's (1989) program yielded scores indicative of the level of activation for the concepts "the target is lying" and "the target is telling the truth." These scores were used in subsequent analysis in two ways. First, the most activated concept was compared with participants' final attributions (i.e., "the target is lying" or "the target is telling the truth") to determine whether this type of analysis is useful for predicting the actual attributions that people render. Second, by subtracting each participant's activation level for truthfulness from their activation level for deceptiveness, a single score resulted that was correlated with participants' confidence scores (i.e., how confident they were when making an attribution regarding targets' veracity). The next section presents the results of these analyses.

RESULTS

The Nature of Mental Models

Overview

Coherence analysis provides a useful method for understanding the mental models that individuals construct and use to evaluate others' veracity. It not only enables us to examine the richness and detail of these networks but also demonstrates the ways in which concepts within these networks are integrated and become more or less salient for the communicator. Specifically, coherence analysis indicates quantitatively the degree to which concepts such as "the communicator is lying" or "the communicator is telling the truth" become either activated or deactivated in a network. In addition, such analysis indicates why some concepts become more activated than others. As will be discussed shortly, the content (i.e., specific observed behaviors, knowledge, and inferences) in participants' mental models varied considerably from participant to participant. Thus, at one level of analysis, participants' mental models for judging veracity were highly idiosyncratic. However, at a more general level, most of Thagard's (1989) theoretical principles of explanatory coherence were operating in all of the participants' mental models (see below).
Two of these participants' mental models were selected for discussion in this section because they represent most or all of Thagard's principles that were used by the rest of the sample. Figure 1, based on output from Thagard's (1989) ECHO program, depicts the mental model of a participant who thought that "R," the female stimulus target, was lying about a previous relationship she had with a man named Jack. As can be seen in the figure, concepts used in the network include information about observed verbal and nonverbal behavior, knowledge, and inference. How did the participant integrate and use such information to reach a conclusion about R's veracity? Moreover, why was it the case that the participant's concept for deceptiveness (E2, "R is lying") received a higher level of activation (.91) than did the participant's concept for truthfulness (E1, "R is telling the truth") (-.82)? First, recall that in the theory of explanatory coherence (and the ECHO program), concepts that explain more facts receive more activation. In this case, the concept for deceptiveness explains more data than the concept for truthfulness does; whereas "R is lying" directly accounts for eight pieces of data (indicated in the figure by single-headed arrows from E2 to E3, E6, E7, E9, E10, E14, E15, and E16), "R is telling the truth" accounts for only two (E20 and E26). Thus, "R is lying" has more explanatory breadth. Second, Thagard's theory specifies that concepts receive more activation if they are explained by higher order concepts. In Figure 1, it can be seen that although the statement "R is lying" is explained by E17 (which, in turn, is explained by the strong subnetwork of E18 and E19 and E21 through E24) and E12 and E13, the statement "R is telling the truth" gets activation from no higher level concepts. Thus, "R is lying" is a stronger concept in the network than "R is telling the truth." Finally, the theory specifies that concepts that contradict more or stronger concepts become deactivated. Thus, in this network, because the statement "R is telling the truth" contradicts the stronger activated "R is lying" (as indicated in Figure 1 by a double-headed arrow), it receives less activation. In short, "R is telling the truth" receives little activation because it explains less, is explained by less, and contradicts the highly activated concept "R is lying." Figure 2 depicts a mental model in which the opposite is true: The participant believed that "S," the male stimulus target, was telling the truth about having a relationship with a woman named Barb. As can be seen in the figure, although the content of the concepts is much different from that in the previous network, Thagard's (1989) theoretical principles are applied in the same way, illustrating how multiple sources of information can be integrated and used in parallel to judge the stimulus target's veracity. In this case, concept E1, "S is telling the truth," directly explains 15 facts (E5, E6, E9, E10, E11, E12, E16, E18, E21, E27, E28, E29, E31, E32, and E33) and is explained by one (E3). On the other hand, E2, "S is lying," explains only one fact (E23) and is jointly explained by E14 and E15 (indicated in the figure by arrows that join one another). In addition, the concept "S is lying" contradicts "S is telling the truth" and "S is shy" (E3), both strongly activated concepts.
Finally, "S is telling the truth" receives further activation from an analogy (indicated in the figure with winding lines).6 Specifically, the participant draws an analogy between her own behavior when she lies and the behavior of the target whom she perceives as deceptive. Because the participant sounds genuine (E20) when she tells the truth (El9), she perceives S's genuine behavior (E21) as a sign of his honesty (El). [IMAGE CHART] Captioned as: Figure 1: [IMAGE CHART] Captioned as: Figure 2 The Construction of Mental Models: Content and Structure The framework advanced in this study assumes that there are causal connections between the multiple concepts that play a role in the process of deception detection. In other words, in any given mental model, concepts are activated because they explain the occurrence of other concepts. Whereas the previous section illustrated the ways in which perceptions of veracity may be affected by multiple concepts at the same time, this section examines specific substructures and content within participants' mental models. Such an analysis is useful for several reasons. First, it illustrates whether there are patterns in substructure use. That is, when detecting deception, are some substructures more common than others? Second, such an examination demonstrates how specific concepts (e.g., "she is telling the truth") are causally related to others. This is important to understand the nature of deception detection from a coherence framework because the location of a concept in the overall network affects the degree to which that concept is activated. Finally, such an analysis is useful because it helps us understand how people use specific types of information to reach conclusions about others' veracity. Indeed, in this study, a total of 1,514 concepts were generated (an average of 25.23 per participant), but not all were used in the same way For these reasons, the following sections explicate more specifically the ways in which the content (i.e., general cues and inferences) of participants' mental models is structurally related to the concepts of truthfulness and deceptiveness. Seven different substructures are identified. Substructure 1: signs of deception. One common substructure found in participants' mental models was one in which concepts of truthfulness or deceptiveness were used to directly or indirectly explain a behavior or inference. Several examples are shown in Figure 3. Such substructures are labeled "signs of deception" because, as seen in each example, the concept "S is lying" explains some behavior or inference. [IMAGE CHART] Captioned as: Figure 3: Recall that the theory of explanatory coherence assumes that concepts that have more explanatory breadth receive more activation than those that do not. Thus, concepts explaining a large number of "signs" should win out over those that do not. This is significant when one considers that "signs" were the most common substructure, used 671 times by participants. Not only did all of the participants use it to judge veracity; 53% of all the concepts generated by participants were found in this type of substructure. It is also informative to examine the general nature of concepts that were found in this substructure, for such an examination indicates what type of information people perceive as indicative of veracity. 
Of the concepts that were contained in this type of substructure, 34% (277) were nonverbal in nature (e.g., "S's legs were shaking"); 24% (196) described verbal behaviors (e.g., "R said she went to the movies"); and 41% (329) were inferences that were made about the targets or the targets' stories (e.g., "S is uncomfortable"). Substructure 2: constituents of deception. In addition to using signs, 39 out of the 60 participants made inferences about veracity by identifying pairs of concepts that, in their minds, constituted deceptive/truthful behavior. Examples of such "constituents of deception" are also illustrated in Figure 3. In each example, two concepts are paired to explain why the target is lying or telling the truth. In the theory of explanatory coherence, constituents receive activation, but not as much activation as a concept that explains data by itself. Interestingly, the nature of the concepts in this substructure differed from those in the previously discussed substructure in several ways. First, although nonverbal behaviors were frequently viewed as signs of veracity, no concepts about targets' nonverbal behaviors were seen as "constituents of deception/honesty." Second, when participants used targets' verbal behaviors as signs, concepts were either specific (e.g., "R said she took Jack to the movies") or general (e.g., "R's answers lacked detail") in nature. However, when viewed as constituents of deception, concepts about verbal behaviors were of a specific nature in every case but one (i.e., 98 out of 99 cases). Finally, when participants used inferences (e.g., "S is nervous") as signs, all inferences were about the targets or the targets' stories (e.g., "S is nervous" or "R's story is not believable"). On the other hand, when inferences were used as constituents, they also included assumptions about the nature of reality (e.g., "no Asians are named Barb"). As will be discussed later, such assumptions illustrate the importance of knowledge structures in the process of deception detection. An examination of the ways in which these concepts are paired indicates that it is what targets say specifically that is used to make inferences about their veracity. To be sure, of the 80 pairs of concepts that were seen as constituting deception, 76 involved at least one concept concerning something specific that the targets said. In 22 of those cases, the specific verbal statement was paired with another contradictory verbal statement. In 23 of those cases, the specific statement was paired with a contradictory assumption about the target (e.g., "R said she dated Jack, a punker" but "R is not the type of person to date a punk rocker"). And in 31 of those cases, the specific statement was paired with a belief about the nature of reality (e.g., "R said Jack, a punker, was a good dancer" but "Punk rockers can't dance"). In short, then, although people believe that deceptive acts influence both what is said and how it is said, they look for deception itself by considering what is said and whether it contradicts itself or something that is believed. Substructure 3: top-down processing. As discussed earlier, Buller and Walther (1989) argued that top-down deception detection occurs when a higher order schema is activated first and, in turn, lower level schemata are activated to search for more data. 
Although it could be argued that all cognitive processing in this study was top-down (because participants were told that this was a deception study), the clearest evidence for top-down processing can be seen in this third substructure, where a single concept, often demonstrating the operation of some stereotype or bias, was used to explain why the target was either lying or telling the truth (e.g., "R is telling the truth" was explained by "R is a female"). Such concepts affect deception detection because they presumably influence the way in which subsequent data are processed. Moreover, such concepts are significant because, according to Thagard's (1989) principles, truthful and deceptive concepts receive activation when they are explained by higher order concepts (see above). With 29 participants using it, this substructure was also common. Regarding content, 39 of these substructures included inferences about the targets (e.g., "R is shy") to explain veracity, 8 used something the targets said (e.g., "R said she was hurt in the relationship") to explain veracity, and 3 used some other form of data ("This is a study on deception") to explain veracity.

Substructure 4: analogies. Thagard (1989) argued that concepts receive activation when they are supported by an analogy, the fourth substructure (also illustrated in Figure 3). Even so, with only 15 participants using them (16 analogies were generated in all), analogies were the least common substructure in this study. It is interesting to note that participants used themselves as a reference point (e.g., "When I lie, I act like the target") in 14 of the analogies and used people in general (e.g., "When people lie, they act like the target is acting") in only 2 of the analogies.

Substructure 5: cohypotheses. Substructures 5 and 2 are similar because, in them, two concepts are used together to explain a third (see Figure 3). The difference is that in Substructure 5, one of the explaining hypotheses is the truthful or deceptive concept (i.e., "S is lying" or "R is telling the truth"). The implication is that in this substructure, the deceptive/truthful concepts receive less activation than if they explain another concept by themselves (see Thagard's principles discussed earlier). With 23 participants using it a total of 33 times, this substructure was fairly common. In every case, the truthfulness/deceptiveness concepts were paired with inferences either about the nature of reality (e.g., "Lying causes you to slip up," which occurred in 9 cases) or about the target or the target's story (e.g., "R is projecting," which occurred in 24 cases).

Substructure 6: explaining away. According to the theory of explanatory coherence, concepts become deactivated when they contradict highly activated concepts. Thus, concepts with many contradictions will become deactivated and will not be used to judge veracity. Such contradictions are perhaps the most intriguing substructure, for they indicate the ways in which individuals "explain away" discrepant information. For instance, one participant believed that the target was telling the truth but was bothered that the target also seemed nervous, a behavior that, for the participant, was indicative of deception. To overcome the discrepancy, the participant generated an alternative (contradictory) explanation for the nervous behavior: "the target is being videotaped." Thus, the participant was still able to see the target as truthful while, at the same time, accounting for the discrepant behavior.
Forty participants generated 74 such alternative concepts.

Substructure 7: isolates. The last type of substructure included concepts that were not structurally related to the truthful/deceptive concepts in participants' mental models. For example, a participant might have said "R was stupid" while not seeing R's stupidity as related to R's veracity. Such concepts, because not related to deception, did not affect the activation of truthful and deceptive concepts in participants' mental models. Twenty-six of the participants generated 66 such isolated concepts. This information, along with information regarding the other substructures, is summarized in Table 1.

[Table 1]

Change Over Time

The process of person perception is a dynamic one. As individuals are exposed to more information, their perceptions change to assimilate, to accommodate, or to explain away that information. The findings from this study indicate that the process of detecting deception is no exception: The mental models of participants in this study grew and changed over time. After making an initial attribution (i.e., the stimulus target is lying or the stimulus target is telling the truth), participants had opportunities to change their attributions five times. Results indicated that 31 participants changed their mind at least once during the interview. Fifteen of those changed their mind twice, and 3 changed their mind three times. Changes from "lie" to "truth" or vice versa occurred at about the same frequency. To illustrate such change, Figure 4 diagrams the associative network of a participant when she was first asked to make an attribution and when she was last asked to make an attribution. The dark rectangles indicate which concepts were generated at Time 1, and the lighter rectangles indicate those that were generated at Time 6. Activation levels for both times are provided as well. As can be seen, the participant initially believed that the target was being deceptive but eventually changed her mind. What happened to cause this change? At Time 1, concept E1, "S is telling the truth," was less activated than E2, "S is lying," for several reasons. First, it explained less data (i.e., E2 explained E3 and E4, whereas E1 explained nothing); second, unlike E2, it was not explained by higher order concepts (E6 and E7); and finally, it contradicted E2, a highly activated concept. By Time 6, however, several changes occurred to reverse these activation levels. First, E1 now explains more concepts (E8 through E14) than E2, which explains no more than it did before. Second, E2 contradicts the now heavily activated E1. And finally, to account for E3 ("S is fidgety") and E4 ("S can't remember things that happened"), which are consistent with the belief that "S is lying" and inconsistent with the now highly activated "S is telling the truth," two alternative explanations, E15 and E16, are added to the network. Now, the once deceptive behavior is explained away, resulting in less activation for E2. In short, the participant comes to believe that the target is no longer behaving deceptively.

The Idiosyncratic Nature of Deception Detection

The previous discussion of substructures illustrates that at one level, it may be possible to make generalizations about the process of judging veracity. Indeed, when concepts are organized into substructures, it is clear that at least some of the substructures (e.g., signs of deception) were used by nearly all of the participants.
On the other hand, an analysis that makes its focus either narrower or broader indicates that, to a large extent, deception detection is idiosyncratic.

[Figure 4]

First, when one narrows the analysis to single concepts within a mental model, it becomes clear that the information different participants used to detect deception was highly variable and idiosyncratic. This was most apparent in the different knowledge structures that participants activated. For example, one participant noted that "Asian women want father figures" and assumed that the stimulus target was lying because "he said his Asian girlfriend wanted a sensitive guy." Another believed that "one hour was not a long time to wait for someone" and assumed that the stimulus target was lying because "she was upset about waiting an hour for her boyfriend." Idiosyncrasies were also apparent in other substructures. Examples include participants who believed targets were lying or truthful because targets were "shy" or "nice-looking" (top-down processing) and participants who disagreed about whether smiling was a sign of deception or truthfulness. Second, when one broadens the analysis to an examination of entire mental models, it becomes clear that the participants in this study showed considerable variation in the coherence (i.e., how well concepts "fit together" to form a whole) and structure of the models they used to judge veracity. For example, Figures 5 and 6 illustrate the mental models of two participants, one more coherent than the other. As can be seen, the network in Figure 5 is less fragmented (i.e., the concepts are more interconnected) than the network in Figure 6. The network in Figure 5 coheres around the concept "S is lying" (E2), which is not only supported by two strong clusters of reasons (E26 through E29 and E6, E10, and E11), but also supports a large number of lower order concepts. On the other hand, the network in Figure 6 coheres around the concept E1, "R is telling the truth," but E1 is not as central to its network as E2 was in the previous example. Instead, in Figure 6, there are several fragmented inferences, some that contradict (i.e., E1 and E2) and others (i.e., E3 and E13) that support data but are relatively isolated from the rest of the network. A comparison of Figures 2 and 4 illustrates the considerable degree to which the organization and use of cognitive substructures vary from participant to participant. Such differences are even more dramatic when one considers that both mental models were constructed while watching the same video (i.e., the male stimulus target was lying), and that both participants ultimately reached the same conclusion about the target's veracity (i.e., the target was telling the truth). As can be seen, compared with the model in Figure 4, the one in Figure 2 includes more signs, more isolates, and an analogy. Although both models include constituents of deception and contradictions, the content of these substructures varies considerably.
[Figure 5]

[Figure 6]

Results of Tested Hypotheses

Preliminary Tests

Before testing the hypotheses of this study, statistics were run to determine whether the dependent variables (attributions and confidence) were affected by the nature of the stimulus tapes (i.e., male or female stimulus target lying or telling the truth) and the condition within which the tape was viewed (i.e., with or without stopping the tape to assess participants' attributions). After Bartlett's test of sphericity and tests for multicollinearity among the dependent measures indicated it was warranted, a multivariate analysis of variance (MANOVA) was computed to determine whether viewing condition, target sex, or target veracity affected participants' continuous attributions of targets' veracity (i.e., the 9-point, Likert-type scale where 1 = definitely deceptive and 9 = definitely truthful) or the level of confidence with which such attributions were made (1 = extremely confident, 9 = not at all confident). With the use of Wilks's criterion, the combined dependent variables were significantly affected by the viewing condition, lambda = .89, F(2, 111) = 6.90, p < .001, eta^2 = .11, and the sex of the target, lambda = .94, F(2, 111) = 3.51, p < .05, eta^2 = .06, but not by the targets' actual veracity, lambda = .99, F(2, 111) = .33, p > .05, or by the interactions between target sex, target veracity, and viewing condition, lambda = .99, F(2, 111) = .24, p > .05; target sex and viewing condition, lambda = .97, F(2, 111) = 1.49, p > .05; target sex and target veracity, lambda = .97, F(2, 111) = 1.49, p > .05; and target veracity and viewing condition, lambda = .99, F(2, 111) = .39, p > .05. Follow-up univariate tests were conducted to examine more closely the effects of the viewing condition and target sex on the dependent variables. Results indicated that although participants in the control group and the interviewed group did not differ significantly in their attributions of the targets, F(1, 112) = 2.52, p > .05, the control group was significantly less confident in making its attributions, F(1, 112) = 13.27, p < .01, eta^2 = .11. Moreover, although participants were not more confident when judging the male target than when judging the female target, F(1, 112) = .74, p > .05, the female target was perceived as significantly more deceptive than was the male target, F(1, 112) = 6.99, p < .01, eta^2 = .06. In short, these results indicate that caution should be taken when making generalizations about confidence across viewing conditions and about attributions across stimulus targets.

Hypothesis 1: Predicting Attributions of Deceptiveness

As discussed previously, the most activated concept in an individual's mental model should predict the attribution that he or she makes. To test this hypothesis, a phi coefficient was computed to examine the relationship between the attribution a participant made (i.e., "the target is lying" or "the target is telling the truth") and the most activated concept in that participant's simulated mental model (i.e., "the target is lying" or "the target is telling the truth"). Results indicated a significant relationship, phi = .86, chi^2(1) = 40.27, p < .001, thereby supporting the first hypothesis of this study.
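As a concrete illustration of this test, the sketch below computes a phi coefficient from a 2 x 2 table crossing the most activated concept with the attribution actually made. The cell counts are hypothetical (the article reports only the resulting statistics); they are chosen merely to sum to the 60 interviewed participants.

```python
# Illustrative computation of the phi coefficient relating the most activated
# concept in a simulated mental model to the attribution actually made.
# The 2 x 2 cell counts are hypothetical; only phi = .86 and chi^2 = 40.27
# are reported in the article.

from math import sqrt

#                        attribution: "lying"  "truthful"
a, b = 27, 2   # most activated concept: "lying"
c, d = 3, 28   # most activated concept: "truthful"

n = a + b + c + d
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
chi_sq = n * phi ** 2            # for a 2 x 2 table, chi^2 = N * phi^2
print(round(phi, 2), round(chi_sq, 2))
```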
Hypothesis 2: Predicting Confidence

For each concept in a participant's mental model, the ECHO simulation produced activation levels. An independent variable was created by computing the difference between the activation levels of the deceptive and truthful concepts in each mental model (i.e., the activation level of "the target is telling the truth" was subtracted from the activation level of "the target is lying"). It was hypothesized that the size of this difference score should predict the confidence with which an attribution is made. That is, smaller differences should lead to less confidence, and larger differences should predict more confidence. To test this hypothesis, regression analysis was used. Results, however, did not support the hypothesis, R = .01, F(1, 58) = .85, p > .05. To investigate possible reasons for this lack of support, further analyses were conducted. A significant negative correlation was found to exist between the activation levels of the participants' "truthful" and "deceptive" concepts (r = -.88, p < .001). This resulted in difference scores with little variability. Moreover, as indicated by the high negative correlation, any variation in these difference scores could be primarily due to chance. Thus, variability due to error, which presumably would not be systematically related to the confidence variable, may have been the actual reason for the lack of support for the hypothesis. Because of this statistical problem, it was decided that Hypothesis 2 should be examined by using the activation score of a single concept instead of the difference between the activation levels of two concepts to predict attributional confidence. Specifically, it was suspected that the activation level of the most positively activated concept (i.e., either "the target is telling the truth" or "the target is lying") in a participant's mental model would predict that person's degree of attributional confidence (i.e., higher activation should lead to higher confidence, whereas lower activation should lead to lower confidence). To test this, regression analysis was used. Results indicated a significant relationship, R = .39, F(1, 58) = 10.58, p < .002, R^2 = .15, thereby supporting this study's second hypothesis.
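The revised test can be sketched in the same spirit. The data below are synthetic and stand in for the activation and confidence scores; only R = .39 and F(1, 58) = 10.58 in the text come from the study. Note the scale's inverse coding (1 = extremely confident, 9 = not at all confident), so higher activation should predict lower scores on the instrument.

```python
# A sketch of the revised Hypothesis 2 test: regressing attributional
# confidence on the activation of the most positively activated veracity
# concept. The data are synthetic, generated only for illustration.

import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)
activation = [random.uniform(0.3, 1.0) for _ in range(60)]
confidence = [max(1.0, min(9.0, 9 - 6 * a + random.gauss(0, 1.5)))
              for a in activation]          # synthetic inverse relationship

r = statistics.correlation(activation, confidence)
n = len(activation)
f_stat = (n - 2) * r ** 2 / (1 - r ** 2)    # F(1, n - 2) for one predictor
print(round(r, 2), round(f_stat, 2))
```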
DISCUSSION

This study focused on understanding the detailed and dynamic mental models that people develop for the purpose of judging other individuals' veracity. To that extent, this study has reached beyond previous research to provide new insights into the process of deception detection. By building on the work of Thagard (1989) and Miller and Read (1991), and by implementing a novel design and analysis that mapped and measured participants' mental models, this study not only revealed that a "connectionist," cognitive science framework provides a unique way of understanding how people integrate large amounts of verbal and nonverbal information, inference, and knowledge for the purpose of detecting deception; it also showed that coherence analysis is useful for predicting outcomes of the detection process. Indeed, both of this study's hypotheses were confirmed. First, tests of Hypothesis 1 indicated that the activation of concepts in a person's simulated mental model predicted the type of attribution that a person made in "real life." In other words, people decided that someone was lying or telling the truth depending on which of these two concepts was the most activated in their network. Support for this hypothesis is significant because it suggests that when considered together, Thagard's (1989) seven theoretical principles of explanatory coherence provide a tenable framework for describing the process by which attributions of veracity are made. Specifically, because activations were determined through an implementation of these theoretical principles, and because activations corresponded to the attributions made by participants, it is apparent that the theory of explanatory coherence is useful for both describing and predicting the process and outcomes of deception detection. In short, this study suggests that deception detection can be conceptualized as a process by which multiple concepts competing within the same mental model become more or less activated according to a small set of theoretical principles of coherence. Concepts receiving the most activation are used to attribute honesty or dishonesty. Second, tests of Hypothesis 2 indicated that the activation of concepts in a network (i.e., the most activated concept in a network) predicted the confidence an individual had when making an attribution. This, of course, makes sense because concepts that receive higher levels of activation are connected to, or supported by, a greater amount of input or data, which presumably would lead an individual to be more confident about his or her attribution. As with Hypothesis 1, support for this hypothesis illustrates that coherence theory and analysis are useful for predicting the outcomes of deception detection. In addition to supporting these hypotheses, this study promotes theoretical understanding of cognition's role in the process of deception detection in other ways. First, this study shows how information regarding deception is cognitively represented and, by doing so, suggests that deception detection is far more intricate than previously conceptualized. For instance, interpersonal deception theory (Buller & Burgoon, 1996; Burgoon, 1994) asserts that deceptive acts contain many parts, including the deceptive message itself, ancillary behaviors intended to bolster credibility and image, and unintended behaviors that leak deceptive intent. The framework presented in this study illustrates the ways in which all of these information sources and much more can be integrated to judge deception. Indeed, in addition to verbal and nonverbal information, this study showed the importance of knowledge and inference in forming impressions of deceit. A primary contribution, therefore, is an understanding of the ways in which cognitive networks operate holistically, processing multiple and possibly contradictory pieces of information simultaneously. One advantage of such an approach is that a concept or explanation (e.g., he is lying), which may have been perceived as coherent when evaluated in isolation, may be seen as incoherent when evaluated comparatively. In addition, such an approach allows us to see how multiple explanations of the same behavior may be active in a mental model at the same time and how some of these explanations win out over others (see Read, 1992). Finally, an implication of this complexity is that detecting deception can be cognitively demanding, requiring detectors to integrate large amounts of information from multiple sources.
This notion is consistent with interpersonal deception theory (Buller & Burgoon, 1996), which asserts that both the enactment and detection of deception are cognitively demanding tasks and that deception detectors must deal with ongoing, rapidly changing information that may be incongruent or ambiguous.

This study indicated that deception detection is intricate not only by showing the ways in which people integrate multiple sources of information, but also by showing that information relevant to deception detection is structured in a variety of ways that have not been addressed previously. Specifically, although this study, like prior research, found that individuals attempt to detect deception by searching for verbal and nonverbal signs indicative of deceit, it identified several additional substructures by which deception detection operates. Examples of such substructures were referred to as "constituents of deception," "analogies," and "explaining away." The first of these ("constituents of deception") indicates that detectors do not simply look for signs of deception; they look for deception itself. That is, detectors organize what they perceive as inconsistencies (e.g., "R said Dan was caring" but "R said Dan lied to her") differently from the way in which they organize information that they believe signals deceit. This distinction is important because differences in the ways in which concepts are related to one another in a network can affect the degree to which such concepts become more or less salient for a communicator. It is also worth mentioning that some of the inconsistencies that were believed to constitute deceit were based, in part, on the participants' personal biases or stereotypes. For example, one participant thought that "R was lying" because "R claimed that Jack, a punker, drove recklessly, when, in fact, punkers don't drive reckless." That such personalized knowledge is used when attempting to explain behavior is consistent with previous research in the area of social cognition (e.g., see Read, 1987; Schank & Abelson, 1977; Wilensky, 1983), thereby supporting the notion that a social cognitive framework is a useful avenue for understanding deception detection. Moreover, the identification of "constituents" as a substructure for detecting deception indicates that the process of judging veracity cannot be understood fully without also understanding the role of personalized knowledge and the manner in which such knowledge is structured.

A second substructure previously unidentified in deception research is the "analogy." This study indicated that detectors readily use themselves and, to some extent, others as references for judging the veracity of a third party. That is, detectors compare the way in which they behave when lying to the behavior of the person they are trying to detect. If the behaviors match, the detector is more likely to view the "detectee" as deceptive. It should be noted that the role of analogies in the process of attribution and causal reasoning has been noted elsewhere (e.g., Kelley, 1972; Read, 1983, 1984; Read & Cesa, 1991). Moreover, the use of analogies indicates, once again, the theoretical importance of acknowledging the integral role that knowledge structures play in the perception of deception. People use stored knowledge about themselves and others to detect deception.

A third previously neglected substructure is called "explaining away" and involves an alternative explanation for an inference or behavior.
As discussed earlier, the theoretical implication of such alternative explanations is that they show how individuals are able to decode inconsistent behaviors (e.g., simultaneously observe behaviors that indicate deceptiveness and behaviors that indicate truthfulness) and still reach a conclusion about a person's veracity. This substructure also underlines the notion that deception detection is just one part of a larger attributional process. Specifically, in many participants' mental models, an attribution of deceptiveness was just one of many possible interpretations for observed behaviors. For example, in one person's mental model, "the target is being interviewed" and "the target is lying" competed to explain why "the target is fidgeting." Thus, this study indicates that the process of deception detection cannot be understood completely without an examination of the multiple interpretations competing within a person's mental model. Indeed, information that, at first glance, may seem unrelated to judgments of veracity (e.g., "S was tired"; "A lot of time has passed since what S is saying occurred"; "S is shocked he had an Asian girlfriend") may function as alternative interpretations that are used to "explain away" competing attributions of deceptiveness. The framework presented in this study provides a useful approach for understanding how such alternative interpretations become more coherent than others and, in turn, are used to make sense of social interactions.

It is worth pointing out that, when examining the content of these and the other substructures, this study deviates from the pattern of most previous deception research by showing that verbal information had a strong influence on veracity judgments. This is significant considering that, ever since Mehrabian's (1972) claim about the preeminence of nonverbals and Ekman and Friesen's (1969) assumption that deception is revealed more through nonverbal than verbal cues, deception scholars have presumed that detection judgments derive almost exclusively from nonverbals and have designed their studies accordingly. Because of this, little work has examined deceptive message designs (for notable exceptions, see Bavelas, Black, Chovil, & Mullett, 1990; Burgoon, Buller, Guerrero, Afifi, & Feldman, 1996; McCornack, 1992). The results of this study point to the importance of research that considers the effects of both verbal and nonverbal information in the process of enacting and perceiving deception.

In addition to indicating a number of substructures that influence the coherence of deception attributions, this study contributes to an understanding of the dynamic nature of deception detection. Past research (e.g., Buller & Burgoon, 1996; Burgoon, 1996; Burgoon, Buller, Ebesu, White, & Rockwell, 1996; Buller, Strzyzewski, & Comstock, 1989, 1991; Stiff & Miller, 1986) has conceptualized deception as a process, but the conceptualization of deceptiveness judgments as a process has been ignored. This study, however, addressed how and why people's attributions change over time. As they are exposed to new information, people add to and alter their mental models for judging veracity. Such changes can sway their perceptions so that a person who was once seen as deceptive can subsequently be viewed as truthful (or vice versa).
This finding, and the finding that over half of the participants changed their attributions at least once while watching the stimulus tapes, contradicts past research indicating that initial impressions of a deceiver tend to anchor and bias later attributions (see Zuckerman et al., 1987; Zuckerman, Koestner, Colella, & Alton, 1984; Zuckerman et al., 1981). It is consistent, however, with literature in social cognition (e.g., Read, 1987), which argues that observed behaviors, inferences, and knowledge can be used to transform collections of attributions.

In addition to illustrating the dynamic nature of deception detection, this study supports Miller and Read's (1991) argument that Thagard's simulation provides an intriguing methodology for exploring individual differences in the coherence of mental models. Indeed, whereas there were similarities in the substructures used by participants in this study, deception detection was highly idiosyncratic in other respects. First, the content of mental models varied considerably from person to person. In some cases, such content was surprisingly rich. For example, one participant believed that "R," the female stimulus target, said "Dan, an ex-boyfriend, was a liar" because "R was lying" and because "R was projecting." Another participant believed that "R was lying" because "R said Jack (a punker) was a good dancer" and "Punk rockers are not good dancers." It is worth noting that, as in the latter example, the idiosyncratic content in mental models was especially noticeable in the different knowledge structures that participants activated to judge veracity. To be sure, this study suggests that the role of knowledge in the process of deception detection is more extensive and complex than previously imagined. For instance, prior work has argued that knowledge gained from behavioral familiarity should affect perceptions of deception. This study showed that, in addition, personalized knowledge regarding the nature of reality (e.g., "Punkers don't drive reckless") is used in multiple ways to make attributions regarding another person's deceptiveness. Such detailed connections among specific inferences, knowledge, and data cannot be examined or understood as readily with previously used methods. Thus, a contribution of the framework in this study is that it enables a richer understanding of the specific sources of information used in the process of deception detection.

Second, this study indicated that the coherence of mental models for judging veracity varied considerably from person to person. Thus, even when presented with the same set of facts (i.e., the same stimulus video), people constructed vastly different mental models. Moreover, such models differed even when participants made the same attribution. The power of Thagard's model is that it helps us understand why people differ in how they view deceptive or truthful communicators. The model suggests that to understand why people differ in their perceptions of deception, researchers must consider the specific ways in which particular individuals combine data, inference, and knowledge to construct meaning. Moreover, the study shows that attributions of deceptiveness may not depend on the observation of any particular behavior but rather on the configurational array within which such observations are embedded.
Of course, the finding that individuals use vastly different types of information and generate idiosyncratic judgment models has implications for current research in deception; this study suggests that we may need to reconsider popular assumptions regarding cross-individual consistency in certain deception-related behaviors. Because the content and coherence of mental models appear to be so idiosyncratic, it may be impossible to find commonalities across individuals when examining deception detection at these levels of analysis. However, at a different level, because this study found that particular substructures (see above) were used by nearly all of the participants, it may be possible to make generalizations concerning the use of such substructures. Although such an analysis is beyond the scope of this study, future research might examine whether there are conditions under which the various substructures are used. By way of example, it is possible that people are required to expend more cognitive effort when using some substructures than when using others. Nonverbal signs, for instance, may activate attributions of deception almost automatically, whereas constituents of deception, which are often made up of contradictory verbal statements or verbal statements that contradict knowledge held by the detector, may require closer scrutiny of the potentially deceptive message. With this in mind, individual differences in communicator characteristics such as the need for cognition (see Petty & Cacioppo, 1986), which is associated with carefully scrutinizing incoming information, and conversational attentiveness (Burns, Seiter, & Miller, 1990), which is associated with increased attention to verbal communication, may also be associated with the use of the "constituents" substructure. On the other hand, individuals who are low in the need for cognition and who are conversationally inattentive may rely more on signs to detect deception. Similarly, a communicator's motivation and/or ability to attend to a message may also affect the types of substructures he or she uses to detect deception. For example, when communicators are motivated to detect deception (e.g., they are led to be suspicious or believe they will be rewarded for successful detection), they may scrutinize messages more and, as a result, use more constituents than communicators who are not motivated. Moreover, when communicators are unable to attend to, or scrutinize, a message (e.g., they are distracted, or they are involved in an interaction rather than simply observing it), they may rely on signs of deception more than on other substructures. Finally, the degree to which a detector is familiar with a communication source may also affect the substructures the detector uses. Specifically, because people possess more knowledge about intimates than they do about strangers, and because knowledge structures are more typical in some substructures (e.g., analogies, constituents, cohypotheses) than in others (e.g., signs), it is possible that some substructures will be used more when judging the veracity of an intimate than when judging the veracity of a stranger. Future research is needed to explore these possible conditions of substructure use.

Beyond these possibilities, this study hints at a number of other potential avenues for future research on the topic of deception. First, future research should examine the relationship between the concepts discussed in this study and accuracy at detecting deception.
With this in mind, however, it should be noted that only particular concepts may be related to accuracy. For example, although it was shown that the strength of subjects' overall mental models predicted the inferences they made about veracity and the confidence with which they made such inferences, there is no reason to suspect that the strength of these mental models will be related to the accuracy with which individuals are able to detect deception. Specifically, when constructing a mental model of another person, an individual may rely on several types of information (i.e., nonverbal behaviors, stereotypes, and so forth). However, not all of these types of information are accurate indicators of deception (see Riggio & Friedman, 1983). Thus, to the extent that individuals use erroneous clues to deception, they can be inaccurate in their attributions even though their mental models for deceptiveness may be highly activated. On the other hand, it is possible that, when their content is also considered, the use of particular substructures may be related to accurate deception detection. For instance, as noted above, many of the "constituents of deception" identified in this study were composed of contradictory statements that the stimulus target made. Because such statements are, by definition, deceptive, they should, when identified, promote accurate detection more than other types of structures. Conversely, when "constituents of deception" are not composed of such contradictory statements, their use should not facilitate accurate detection.

Second, further work is needed to explore the confidence with which individuals make judgments of veracity. Thagard (1989), for example, noted that each of the concepts in a network can be assigned certain weights before the ECHO program is run. Future work could therefore ask participants to assign their own values to the inferences they make on the basis of how confident they are in those inferences. It is possible that such a procedure could enhance our ability to predict confidence.

Third, future work should explore the effects of information order and of change over time on the construction of mental models. Specifically, does the order in which people receive information affect how likely they are to select one interpretation over another (Read & Miller, 1989)? Also, do individuals who change their attributions over the course of time make more accurate attributions? And if so, what types of changes lead to more accurate perceptions? As people's mental models develop further, are we better able to predict their attributions over time? How are models affected when new information is added? Clearly, Thagard's (1989) simulation provides a methodology that would be useful for answering such questions. For instance, the simulation makes it possible to examine hypothetically the ways in which information that is added to a system affects coherence. As Miller and Read (1991) noted, "If part of the network was changed (i.e., for example, if one belief was altered), the model may be able to predict the likely higher order structures around which the resulting model would cohere" (p. 87). In other words, by using coherence analysis, it might be possible to predict which concepts in a model would need to be altered or added to change a person's attribution regarding veracity.
On the other side of the coin, and as alluded to earlier, the theory of explanatory coherence suggests a number of possibilities for future research on the enactment of deception. From this perspective, the primary task facing a deceiver who wants to be perceived as truthful is to construct the most coherent lie possible. How do individuals attempt to construct and tell coherent lies? The theory of explanatory coherence would argue that a deceiver must create and integrate facts that activate attributions of truthfulness, provide alternative and simple explanations for data that would otherwise be explained by deceptiveness, and avoid contradictions. Clearly, the task of constructing a lie that appears truthful is a difficult one, and some individuals may be more skilled at it than others. For instance, do Machiavellians or socially skilled individuals tell more coherent lies, and, if so, are their stories believed more by others because of it? Are factors such as the spontaneity of the lie (see Cody, Marston, & Foster, 1984) related to the coherence or believability of the lie? Lastly, do people tell more coherent lies to those they know than to strangers?

Finally, contemporary work (Buller & Burgoon, 1996) has argued that a complete understanding of deception requires an examination of the ways in which deceivers and detectors mutually influence one another in dynamic, ongoing interactions. For example, interpersonal deception theory (Buller & Burgoon, 1996; Burgoon, 1994; Burgoon & Buller, 1994) asserts that a liar's behavior may cause a receiver to react suspiciously. Such behavior, in turn, may affect the liar's subsequent behavior. Although this study did not permit senders and receivers to interact, the framework presented here can be used in future work to extend what is known about the interactional nature of deception detection. For example, the framework presented in this study would enable a researcher to examine (a) how the construction of a mental model leads a detector to be suspicious, (b) how the mental model affects the detector's subsequent behaviors (e.g., behaviors indicative of suspiciousness), (c) how the detector's behavior is used as new input in the mental model of the deceiver, and (d) how the amended model of the deceiver affects subsequent deceptive behavior.

Before concluding, some limitations regarding the generalizability of this study must be mentioned. First, the lies told by the stimulus targets were sanctioned, as lies in the real world typically are not. Second, although the lies were long and narrative in nature, the fact that they were told in front of a video camera and that stimulus targets were permitted to plan their lies might have affected how the lies were told. Third, because the deceptions created by stimulus subjects in this study could be characterized as fabrications, the results may not generalize to attributions made when other forms of deception are used (for a discussion of alternative forms of deception, see Buller & Burgoon, 1996). Fourth, because the majority of judges in this study were female, the results may only generalize to situations with similar populations. Fifth, although the experimenter made every effort to respond neutrally to participants' answers, there may have been experimenter expectancy effects, because the interviewer was not blind to the conditions of the study.
Sixth, although the analyses described above may seem to depict neural processes, self-reports, inferences, and the ECHO simulation may not be veridical indicators of neural activation. As Thagard (1989) noted, "Despite ECHO's parallelism and use of vague neural metaphor of connections, I have not listed neural plausibility as one of its advantages, because current knowledge does not allow any sensible mapping from nodes of ECHO representing concepts to anything in the brain" (p. 458). Even so, it has been argued that ECHO's parallelism has advantages independent of the brain analogy (Thagard, 1986) and that, considering the neonatal state of the study of cognition, researchers should pursue the study of the mind using multiple methodologies.12

Finally, the stimulus tape-viewing conditions were artificial, particularly in cases where the tape was stopped to acquire attributional information. Specifically, even though participants did not seem to have difficulty producing the data, and even though participants were instructed to indicate to the interviewer the thoughts that they had had while watching the videotaped targets, stopping the tapes was intrusive and therefore may have resulted in more elaborated mental models than are present in a typical interaction. Clearly, collecting on-line data during the course of an interaction presents methodological problems that should be addressed in future research. In the meantime, it should be noted that this study may be most generalizable to situations where individuals are given time to carefully consider the veracity of the communication they observe (e.g., during breaks in courtroom deliberations).

The conceptualization of deception detection advanced in this study differs considerably from the ways in which it has been viewed previously. This study portrays it as a dynamic and changing process by which individuals attempt to integrate, structure, and make sense of an array of inferences, knowledge, and verbal and nonverbal behaviors. As such, it provides a richer understanding of how and why people decide whether someone is lying or behaving honestly. It is hoped that the current study represents only a beginning for examinations of this type, because the ways in which people attempt to make sense of others around them are integral not only to an understanding of the process of deception detection but also to an understanding of the process of communication itself.

[Tables captioned Appendix A, Appendix B, Appendix C, and Appendix D appear in the original; they are not reproduced in this text version.]

NOTES

1. It is interesting to note that previous research (e.g., Riggio & Friedman, 1983) has demonstrated that these scripts are not very accurate (i.e., behaviors used to detect deception are not generally the same behaviors that are enacted when telling lies).

2. Interview time between each video segment and for each participant varied considerably, with some participants giving several responses and others few. Data regarding specific amounts of time were not collected. My impressions are that interview segments ranged somewhere between 2 and 10 minutes and averaged about 5 to 6 minutes.

3. For further discussion of ECHO and its implementation of the principles of explanatory coherence, see Thagard (1989) and Miller and Read (1991).

4. The activation a_j of a concept j is updated using the following equation: a_j(t + 1) = a_j(t)(1 - d) + enet_j[max - a_j(t)] + inet_j[a_j(t) - min], where d is a decay parameter that decrements each concept at every cycle, enet_j is the net excitatory input, and inet_j is the net inhibitory input. Specifically, enet_j = Σ_i w_ij a_i(t) for w_ij > 0, and inet_j = Σ_i w_ij a_i(t) for w_ij < 0, where w_ij is the weight between concepts i and j; max is the maximum activation value possible (1), and min is the minimum activation value possible (-1). The degree to which a concept becomes activated depends on the number of concepts connected to it, the strength and valence of the links, and the activation level of connected concepts (see Miller & Read, 1991; Thagard, 1989).
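For readers who want to see the update rule in note 4 in executable form, the following is a minimal sketch (my own illustration in Python, not the original ECHO program; the toy network, weights, and parameter values are invented):

```python
import numpy as np

def update_activations(a, W, d=0.05, max_a=1.0, min_a=-1.0):
    """One synchronous pass of the activation rule in note 4. W is a
    symmetric weight matrix: positive entries are excitatory links,
    negative entries inhibitory links."""
    pos = np.where(W > 0, W, 0.0)
    neg = np.where(W < 0, W, 0.0)
    enet = pos @ a  # net excitatory input to each concept
    inet = neg @ a  # net inhibitory input to each concept
    a_new = a * (1 - d) + enet * (max_a - a) + inet * (a - min_a)
    return np.clip(a_new, min_a, max_a)

# Toy network: an evidence unit (index 0, clamped at 1.0, analogous to
# ECHO's special unit) plus two competing hypotheses, "the target is
# lying" (index 1) and "the target is telling the truth" (index 2).
W = np.zeros((3, 3))
W[0, 1] = W[1, 0] = 0.4   # the datum supports the "lying" hypothesis
W[0, 2] = W[2, 0] = 0.1   # and weakly supports the "truthful" hypothesis
W[1, 2] = W[2, 1] = -0.6  # contradictory hypotheses inhibit each other

a = np.array([1.0, 0.0, 0.0])
for _ in range(100):      # iterate until the network (approximately) settles
    a = update_activations(a, W)
    a[0] = 1.0            # keep the evidence unit clamped
print(a)                  # the better-supported hypothesis ends up more activated
```

Altering a single weight, or adding a row and column to W for a new piece of information, and then re-running the settling loop illustrates in miniature the kind of what-if coherence analysis described earlier in connection with Miller and Read's (1991) observation.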
5. It is important to note that scores indicating which concepts were most activated in a mental model were derived independently of judgments of veracity.

6. Analogies lead to increased activation for those concepts involved in the analogy.

7. Because participants' observations and inferences were collected in matrix form (see the Method section), a network analysis of the substructures and content within participants' mental models was possible.

8. Despite the idiosyncratic nature of the specific content of concepts, it should be noted that, consistent with previous research, there were consistencies regarding the general nature of the signs participants used. For instance, all of the participants used inferences and verbal and nonverbal cues as signs of deceptiveness. Of the cues used, 34% were nonverbal in nature (e.g., "S's legs were shaking"), 25% were verbal in nature (e.g., "R's answers were vague"), and 41% were inferences made about the targets or the targets' stories (e.g., "R is very uncomfortable").

9. Although an examination of accuracy was beyond the scope of this study, because they may be of interest to deception scholars, the results of accuracy tests are presented here. To determine participants' accuracy, their dichotomous attributions of targets' veracity (i.e., "the target is lying" or "the target is telling the truth") were checked against the targets' actual behavior. Overall, 66 participants were accurate in their judgments, whereas 54 were inaccurate. A series of phi coefficients was computed to assess the effects of viewing condition, target sex, and target veracity on participants' accuracy. Results indicated that the accuracy of participants who were interviewed (48% were accurate) was not significantly different from the accuracy of participants who viewed the stimulus tapes nonstop (42% were accurate) (phi = .067, χ² = .539, df = 1, p > .05). Moreover, targets were not judged more accurately when lying (33% of deceptive targets were judged accurately) than when telling the truth (50% of truthful targets were judged accurately) (phi = .17, χ² = 3.43, df = 1, p > .05), and the female target was not judged more accurately than the male target (the female target was judged accurately in 40% of the cases and the male target in 45% of the cases) (phi = .05, χ² = .307, df = 1, p > .05).
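As a concrete illustration of the tests in note 9, the following sketch computes the phi coefficient and chi-square for the viewing-condition comparison; the cell counts are my reconstruction from the reported percentages, assuming 60 judges per condition (the note itself reports only percentages and test statistics):

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2 x 2 table: rows = viewing condition (interviewed / nonstop viewing),
# columns = judgment accuracy (accurate / inaccurate). Counts reconstructed
# from the reported 48% and 42% accuracy rates; treat them as illustrative.
table = np.array([[29, 31],
                  [25, 35]])

chi2, p, dof, _ = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())  # phi coefficient for a 2 x 2 table
print(f"phi = {phi:.3f}, chi-square({dof}) = {chi2:.3f}, p = {p:.3f}")
# These reconstructed counts recover the reported statistics for this test:
# phi = .067, chi-square = .539, p > .05
```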
10. Because Miller and Read (1991) argued that negatively activated concepts in a network fade away, whereas positively activated concepts are used to make attributions, it made theoretical sense to use participants' most positively activated concept scores to test Hypothesis 2. Indeed, because positively activated concepts presumably guide the attribution process, the level at which such concepts are activated should predict attributional confidence.

11. As an application and illustration of parallel distributed processing, this study should also be of interest to scholars in the field of cognitive science. The issue here centers around whether traditional models of cognition, which require vast numbers of microsteps if implemented sequentially, can provide a plausible account of the rapid nature of human thought (for further discussion, see McClelland, Rumelhart, & Hinton, 1986). This study illustrates that parallel distributed processing models may provide a more reasonable account of information processing by showing how a number of different pieces of information may be kept in mind at once and simultaneously influence one another.

12. For additional limitations of the ECHO simulation, refer to Thagard (1989).

REFERENCES

Bavelas, J. B., Black, A., Chovil, N., & Mullett, J. (1990). Equivocal communication. Newbury Park, CA: Sage.
Bond, C. F., Jr., Kahler, K. N., & Paolicelli, L. M. (1985). The miscommunication of deception: An adaptive approach. Journal of Experimental Social Psychology, 21, 331-345.
Brandt, D. R., Miller, G. R., & Hocking, J. E. (1980). The truth-deception attribution: Effects of familiarity on the ability of observers to detect deception. Human Communication Research, 6, 99-108.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory, 6(3), 203-242.
Buller, D. B., Strzyzewski, K. D., & Comstock, J. (1989, November). The efficacy of probing as a deception detection strategy: The effect of suspicion and knowledge of the source. Paper presented at the annual meeting of the Speech Communication Association, San Francisco.
Buller, D. B., Strzyzewski, K. D., & Comstock, J. (1991). Interpersonal deception: I. Deceivers' reactions to receivers' suspicions and probing. Communication Monographs, 58, 1-24.
Buller, D. B., & Walther, J. B. (1989, February). Deception in established relationships: Application of schema theory. Paper presented at the annual meeting of the Western Speech Communication Association, Spokane, WA.
Burgoon, J. K. (1994). Prologue to special issue: Interpersonal deception. Journal of Language and Social Psychology, 13(4), 357-365.
Burgoon, J. K., & Buller, D. B. (1994). Interpersonal deception: III. Effects of deceit on perceived communication and nonverbal behavior dynamics. Journal of Nonverbal Behavior, 18, 155-184.
Burgoon, J. K., Buller, D. B., Ebesu, A. S., & Rockwell, P. (1994). Interpersonal deception: V. Accuracy in deception detection. Communication Monographs, 61(4), 303-325.
Burgoon, J. K., Buller, D. B., Ebesu, A. S., White, C. H., & Rockwell, P. (1996). Testing interpersonal deception theory: Effects of suspicion on communication behaviors and perceptions. Communication Theory, 6(3), 243-267.
Burgoon, J. K., Buller, D. B., Guerrero, L. K., Afifi, W. A., & Feldman, C. M. (1996). Interpersonal deception: XII. Information management dimensions underlying deceptive and truthful messages. Communication Monographs, 63(1), 50-69.
Burns, D. M., Seiter, J. S., & Miller, L. C. (1990, February). Who remembers best what others say? The assessment of conversational attentiveness. Paper presented at the annual meeting of the Western Speech Communication Association, Phoenix, AZ.
Cody, M. J., Marston, P. J., & Foster, M. (1984, May). Paralinguistic and verbal leakage of deception as a function of attempted control and timing of questions. Paper presented at the annual meeting of the International Communication Association, San Francisco.
Comadena, M. E. (1982). Accuracy in detecting deception: Intimate and friendship relationships. In M. Burgoon (Ed.), Communication yearbook 6 (pp. 446-472). Beverly Hills, CA: Sage.
Cupach, W. R., & Metts, S. (1986). Accounts of relational dissolution: A comparison of marital and non-marital relationships. Communication Monographs, 53, 311-334.
Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32, 88-105.
Fiedler, K., & Walka, I. (1993). Training lie detectors to use nonverbal cues instead of global heuristics. Human Communication Research, 20, 199-223.
Gordon, R. A., Baxter, J. C., Rozelle, R. M., & Druckman, D. (1987). Expectations of honest, evasive, and deceptive nonverbal behavior. Journal of Social Psychology, 127, 231-233.
Hale, J. L., & Stiff, J. B. (1990). Nonverbal primacy in veracity judgments. Communication Reports, 3, 75-83.
Harvey, J. H., Agostinelli, G., & Weber, A. L. (1989). Account making and the formation of expectations about close relationships. In C. Hendrick (Ed.), Close relationships (pp. 39-42). Newbury Park, CA: Sage.
Harvey, J. H., Wells, G. L., & Alvarez, M. D. (1987). Attribution in the context of conflict and separation in close relationships. In J. H. Harvey & R. F. Kidd (Eds.), New directions in attribution research (pp. 235-260). Hillsdale, NJ: Lawrence Erlbaum.
Kelley, H. H. (1972). Attribution in social interaction. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp. 151-174). Morristown, NJ: General Learning Press.
Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95, 163-182.
Kraut, R. E. (1978). Verbal and nonverbal cues in the perception of lying. Journal of Personality and Social Psychology, 36, 380-391.
Kraut, R. E., & Poe, D. (1980). Behavioral roots of person perception: The deception judgments of customs inspectors and laymen. Journal of Personality and Social Psychology, 39, 784-798.
Maier, N. R. F., & Janzen, J. C. (1967). Reliability of reasons used in making judgments of honesty and dishonesty. Perceptual and Motor Skills, 25, 141-151.
McClelland, J. L., Rumelhart, D. E., & Hinton, G. E. (1986). The appeal of parallel distributed processing. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group, Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1, pp. 3-44). Cambridge: MIT Press.
McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59, 1-16.
McCornack, S. A., & Parks, M. R. (1985). Deception detection and relationship development: The other side of trust. In M. McLaughlin (Ed.), Communication yearbook 9. Beverly Hills, CA: Sage.
Mehrabian, A. (1972). Nonverbal communication. Chicago: Aldine-Atherton.
Miller, L. C., & Read, S. J. (1991). On the coherence of mental models of persons and relationships: A knowledge structure approach. In G. J. O. Fletcher & F. Fincham (Eds.), Cognition in close relationships (pp. 69-99). Hillsdale, NJ: Lawrence Erlbaum.
O'Sullivan, M., Ekman, P., & Friesen, W. V. (1988). The effect of comparisons on detecting deceit. Journal of Nonverbal Behavior, 12, 203-216.
Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to attitude change. New York: Springer-Verlag.
Read, S. J. (1983). Once is enough: Causal reasoning from a single instance. Journal of Personality and Social Psychology, 45, 323-334.
Read, S. J. (1984). Analogical reasoning in social judgment: The importance of causal theories. Journal of Personality and Social Psychology, 46, 14-25.
Read, S. J. (1987). Constructing causal scenarios: A knowledge structure approach to causal reasoning. Journal of Personality and Social Psychology, 52, 288-302.
Read, S. J. (1992). Constructing accounts: The role of explanatory coherence. In M. L. McLaughlin, M. J. Cody, & S. J. Read (Eds.), Explaining one's self to others: Reason-giving in a social context (pp. 3-19). Hillsdale, NJ: Lawrence Erlbaum.
Read, S. J., & Cesa, I. L. (1991). This reminds me of a time when . . . : Expectation failures in reminding and explanation. Journal of Experimental Social Psychology, 27, 11-25.
Read, S. J., & Miller, L. C. (1989). Explanatory coherence in understanding persons, interactions, and relationships. Behavioral and Brain Sciences, 12, 485-486.
Riggio, R. E., & Friedman, H. S. (1983). Individual differences and cues to deception. Journal of Personality and Social Psychology, 45, 899-915.
Riggio, R. E., Tucker, J., & Widaman, K. F. (1987). Verbal and nonverbal cues as mediators of deception ability. Journal of Nonverbal Behavior, 11, 126-143.
Rumelhart, D. E. (1984). Schemata and the cognitive system. In R. S. Wyer, Jr., & T. K. Srull (Eds.), Handbook of social cognition (Vol. 1, pp. 161-185). Hillsdale, NJ: Lawrence Erlbaum.
Rumelhart, D. E., & McClelland, J. L. (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 1: Foundations. Cambridge: MIT Press.
Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals and understanding. Hillsdale, NJ: Lawrence Erlbaum.
Seiter, J. S., & Wiseman, R. L. (1995). Ethnicity and deception detection. Journal of the Northwest Communication Association, 23, 24-38.
Stiff, J. B., Hale, J. L., Garlick, R., & Rogan, R. G. (1990). Effect of cue incongruence and social normative influences on individual judgments of honesty and deceit. Southern Communication Journal, 55, 192-205.
Stiff, J. B., & Miller, G. R. (1984, May). Deceptive behaviors and behaviors which are interpreted as deceptive: An interactive approach to the study of deception. Paper presented at the annual meeting of the International Communication Association, San Francisco.
Stiff, J. B., & Miller, G. R. (1986). "Come to think of it . . .": Interrogative probes, deceptive communication, and deception detection. Human Communication Research, 12, 339-357.
Streeter, L. A., Krauss, R. M., Geller, V., Olson, C., & Apple, W. (1977). Pitch change during attempted deception. Journal of Personality and Social Psychology, 35, 345-350.
Thagard, P. (1986). Parallel computation and the mind-body problem. Cognitive Science, 10, 301-318.
Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12, 435-467.
Toris, C., & DePaulo, B. M. (1985). Effects of actual deception and suspiciousness of deception on interpersonal perception. Journal of Personality and Social Psychology, 47, 1063-1073.
Wilensky, R. (1983). Planning and understanding: A computational approach to human reasoning. Reading, MA: Addison-Wesley.
Zuckerman, M., Fischer, S. A., Osmun, R. W., Winkler, B. A., & Wolfson, L. R. (1987). Anchoring in lie detection revisited. Journal of Nonverbal Behavior, 11, 4-12.
Zuckerman, M., Koestner, R., Colella, M. J., & Alton, A. O. (1984). Anchoring in the detection of deception and leakage. Journal of Personality and Social Psychology, 47, 301-311.
Zuckerman, M., Koestner, R., & Driver, R. E. (1981). Beliefs about cues associated with deception. Journal of Nonverbal Behavior, 6, 105-114.
John S. Seiter (PhD, University of Southern California, 1993) is an assistant professor in the Department of Languages, Philosophy and Speech Communication at Utah State University. A previous version of this article was presented at the National Communication Association's convention in New Orleans, 1994. This article was developed from part of the author's dissertation, directed by Lynn C. Miller at the University of Southern California. The author would like to thank the following people for their invaluable contributions toward the completion of this project: Lynn C. Miller; Michael Cody; Steven Read; Scott Smith; Verlaine McDonald; Susan Avanzino; Kerry Osborne; Tom Hollihan; Gwen Brown; Jon Bruschke; Harold Kinzer; Cindy Gallois; Steven McCornack; one anonymous reviewer; Nancy Birch; and Rhonda, Delores, and Debora Seiter. Please address correspondence concerning this article to John S. Seiter, Department of Languages, Philosophy and Speech Communication, Utah State University, Logan, UT 84322-0720; phone: (435) 797-0138; e-mail: jsseiter@cc.usu.edu.