Abstract
Since the release of ChatGPT, heated discussions have focused on the acceptable uses of generative artificial intelligence (GenAI) in education, science, and business practices. A salient question in these debates pertains to perceptions of the extent to which creators contribute to the co-produced output. As the current research establishes, the answer to this question depends on the evaluation target. Nine studies (seven preregistered, total N = 4498) document that people evaluate their own contributions to co-produced outputs with ChatGPT as higher than those of others. This systematic self–other difference stems from differential inferences regarding types of GenAI usage behavior: People think that they predominantly use GenAI for inspiration, but others use it to outsource work. These self–other differences in turn have direct ramifications for GenAI acceptability perceptions, such that usage is considered more acceptable for the self than for others. The authors discuss the implications of these findings for science, education, and marketing.
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 496–512 |
| Number of pages | 17 |
| Journal | International Journal of Research in Marketing |
| Volume | 41 |
| Issue number | 3 |
| Early online date | 29 May 2024 |
| DOIs | |
| Publication status | Published – Sept 2024 |
Bibliographical note
Publisher Copyright: © 2024 The Authors