TY - JOUR
T1 - A Personal Model of Trumpery
T2 - Linguistic Deception Detection in a Real-World High-Stakes Setting
AU - Van Der Zee, Sophie
AU - Poppe, Ronald
AU - Havrileck, Alice
AU - Baillon, Aurélien
N1 - Acknowledgments:
We thank the Washington Post Fact Checker team for providing their fact-checked data set of Trump's communications, Benjamin Tereick for methodological suggestions, and Jozien Bensing and Annelies Vredeveldt for providing feedback on the manuscript. For a website discussing the themes of this research, see https://www.apersonalmodeloftrumpery.com/.
Publisher Copyright:
© The Author(s) 2021.
PY - 2021/12/21
Y1 - 2021/12/21
N2 - Language use differs between truthful and deceptive statements, but not all differences are consistent across people and contexts, complicating the identification of deceit in individuals. By relying on fact-checked tweets, we showed in three studies (Study 1: 469 tweets; Study 2: 484 tweets; Study 3: 24 models) how well personalized linguistic deception detection performs by developing the first deception model tailored to an individual: the 45th U.S. president. First, we found substantial linguistic differences between factually correct and factually incorrect tweets. We developed a quantitative model and achieved 73% overall accuracy. Second, we tested out-of-sample prediction and achieved 74% overall accuracy. Third, we compared our personalized model with linguistic models previously reported in the literature. Our model outperformed existing models by 5 percentage points, demonstrating the added value of personalized linguistic analysis in real-world settings. Our results indicate that factually incorrect tweets by the U.S. president are not random mistakes of the sender.
AB - Language use differs between truthful and deceptive statements, but not all differences are consistent across people and contexts, complicating the identification of deceit in individuals. By relying on fact-checked tweets, we showed in three studies (Study 1: 469 tweets; Study 2: 484 tweets; Study 3: 24 models) how well personalized linguistic deception detection performs by developing the first deception model tailored to an individual: the 45th U.S. president. First, we found substantial linguistic differences between factually correct and factually incorrect tweets. We developed a quantitative model and achieved 73% overall accuracy. Second, we tested out-of-sample prediction and achieved 74% overall accuracy. Third, we compared our personalized model with linguistic models previously reported in the literature. Our model outperformed existing models by 5 percentage points, demonstrating the added value of personalized linguistic analysis in real-world settings. Our results indicate that factually incorrect tweets by the U.S. president are not random mistakes of the sender.
UR - http://www.scopus.com/inward/record.url?scp=85122056749&partnerID=8YFLogxK
U2 - 10.1177/09567976211015941
DO - 10.1177/09567976211015941
M3 - Article
C2 - 34932410
AN - SCOPUS:85122056749
VL - 33
SP - 3
EP - 17
JO - Psychological Science
JF - Psychological Science
SN - 0956-7976
IS - 1
ER -