What readers think about computer-generated texts

An experimental study carried out by LMU media researchers has found that readers rate texts generated by algorithms as more credible than texts written by real journalists.

Readers respond favorably to texts generated by computers – especially when they are unaware that what they are reading was assembled on the basis of an algorithm. This, at any rate, is the conclusion suggested by the results of an experiment recently conducted by LMU media researchers. In the study, 986 subjects were asked to read and evaluate online news stories. Articles which the participants believed to have been written by journalists were consistently given higher marks for readability, credibility and journalistic expertise than those that were flagged as computer-generated – even in cases where the real “author” was in fact a computer.

Several media outlets already regularly publish texts put together by computer programs. Perhaps the best known of those that have adopted the practice – sometimes dubbed ‘robot journalism’ – is the news agency Associated Press. German publishers have also begun to use algorithms to compile texts. At the moment, these are most likely to turn up on the sports pages and in the financial section, as news reports in these fields tend to be based on source data that are already structured in predictable ways.

Dr. Andreas Graefe and Professor Hans-Bernd Brosius at LMU’s Department for Communication Studies and Media Research (IfKW) have now investigated how readers perceive and respond to news stories generated by computers. The results of their study appear in the latest issue of “Journalism”. Graefe and colleagues chose two texts from the online editions of popular German news outlets: one was a report on a soccer match, the other dealt with the market performance of shares issued by an automotive supplier. In addition, they used an algorithm developed at the Fraunhofer Institute for Communication, Information Processing and Ergonomics to generate texts on the same subjects.

Each participant in the study was then given a sports text and a business text to read, each accompanied by a note stating whether it had been written by a journalist or by a computer program. What the experimental subjects did not know was that, in some cases, the information given in these notes was deliberately misleading, i.e. false.

When they analyzed the results of the experiment, the LMU researchers found that participants rated articles that were, or were believed to be, written by humans as more readable than computer-generated texts. In spite of this preference, however, the computer-generated texts were judged to be more credible than the stories actually written by journalists. This second finding surprised even the designers of the experiment. “The automatically generated texts are full of facts and figures – and the figures are listed to two decimal places. We believe that this impression of precision strongly contributes to the perception that they are more trustworthy,” says Mario Haim of the IfKW, one of the authors of the paper.

With respect to readability, however, readers consistently rated articles attributed to real journalists more favorably – even when the attribution was false. “To explain this finding, we assume that readers’ expectations differ depending on whether they believe the text to have been written by a person or a machine, and that this preconception influences their perception of the text concerned,” says Haim. A more critical attitude toward computer-generated texts may also stem from the fact that readers have little experience with such reports.

Overall, however, the differences in the assessments of the two types of text were relatively small. “We would argue that this suggests that brief, computer-generated texts dealing with sporting events or business and finance are already very appealing to readers,” Haim concludes.