Abstract
This study investigated people's attitudes toward texts generated by large language models (LLMs), such as social networking service posts and news comments. In recent years, the number of people who encounter LLM-generated texts has increased. Because LLMs can generate natural texts that are almost indistinguishable from those written by humans, there is concern that such texts could be used to maliciously manipulate public opinion. To evaluate how LLM-generated texts are received, we conducted an experiment based on the hypothesis that knowing a text was generated by an LLM influences user acceptance. We controlled whether participants were aware that a text had been generated by an LLM and assessed their impressions from four perspectives: familiarity, reliability, empathy, and informativeness.
The results suggested that a generated comment imitating an expert's opinion was ranked higher when it was disclosed that the comment had been generated by an LLM. In particular, ratings of reliability and informativeness were sensitive to this disclosure, whereas familiarity and empathy were not.
Information
Book title
12th International Conference on Informatics, Electronics & Vision
Date of presentation
2025/05/27
Citation
Nanase Mogi, Megumi Yasuo, Yutaka Morino, and Mitsunori Matsushita. "Analysis of the changes in the attitude of the news comments caused by knowing that the comments were generated by a large language model." 12th International Conference on Informatics, Electronics & Vision, 2025.